🤖 Agentic AI

'Claudy Day' Exploit Chain Discovered in Claude.ai — Invisible Prompt Injection Enables Silent Data Exfiltration via Files API

2 min read1 views

On March 18, 2026, security researchers at Oasis Security publicly disclosed a critical exploit chain dubbed 'Claudy Day' affecting Anthropic's Claude.ai platform. The vulnerability chains three separate flaws into a complete attack pipeline that enables silent data exfiltration from user sessions.

The Three Chained Vulnerabilities:

  1. Invisible Prompt Injection via URL Parameters: Hidden instructions can be embedded in Claude.ai URLs through URL parameters. When a victim clicks such a link and submits any prompt, the hidden instructions are silently processed by the system alongside the user's input.

  2. Open Redirect on claude.com: An open redirect vulnerability on the claude.com domain can be combined with search engine advertisements to deliver malicious links that appear legitimate. Victims see a trusted claude.com URL.

  3. Data Exfiltration via Anthropic Files API: The hidden prompts can include attacker-controlled API keys that instruct Claude to package the victim's conversation history and upload it to an attacker-controlled Anthropic account via the Files API — without requiring external tools or additional integrations. The Files API supports up to 500 MB per file and 100 GB per organization.

Attack Flow:

  • Attacker creates a Claude.ai URL with hidden prompt injection instructions
  • URL is distributed via search ads or phishing (appearing as legitimate claude.com)
  • Victim clicks link and uses Claude normally
  • Hidden instructions silently package conversation history
  • Data is uploaded to attacker's Anthropic account via Files API
  • No visible indication of compromise to the user

Impact Assessment: In a default session, the agent can access conversation history and memory containing sensitive user information. If enterprise integrations, MCP servers, or specialized tools are enabled, the blast radius expands exponentially — attackers could read internal files, interact with connected APIs, and transmit messages autonomously.

Patch Status: Anthropic has patched the prompt injection vulnerability and is mitigating the remaining structural issues. However, security experts warn that organizations must proactively audit all connected agent integrations.

This follows a pattern of Claude security issues: Claude Code RCE flaws (February 2026), Anthropic Cowork vulnerability (January 2026), and MCP Inspector RCE (July 2025).

Coverage appeared simultaneously from DarkReading, TechNadu, and Hackread on March 18, 2026.

Share this article

🧠 Stay Updated on AI Agents

Get weekly insights on agentic AI, networks and infrastructure. No spam.

Join 500+ AI builders. Unsubscribe anytime.

Deploy Your AI Agent Today

Launch a managed OpenClaw instance in minutes

Request demo →