πŸ€– Agentic AI

"Claudy Day" Attack Chain: Oasis Security Discloses Three Chained Claude.ai Vulnerabilities Enabling Silent Data Exfiltration via Google Ads

β€’3 min readβ€’1 views

On March 31, 2026, Oasis Security published a detailed disclosure of three vulnerabilities in Claude.ai and the broader claude.com platform, collectively dubbed "Claudy Day". When chained together, these create a complete attack pipeline from targeted victim delivery to silent data exfiltration β€” requiring no MCP servers, tools, or integrations.

The Three Vulnerabilities:

  1. Invisible Prompt Injection via URL Parameters: Claude.ai allows users to open a new chat with a pre-filled prompt via a URL parameter (claude.ai/new?q=...). Oasis researchers discovered that certain HTML tags could be embedded in this parameter, invisible in the text box but fully processed by Claude when the user hit Enter. An attacker could hide arbitrary instructions including data-extraction commands inside what appears to be a normal prompt.

  2. Data Exfiltration via Anthropic Files API: Claude code execution sandbox restricts outbound network access but allows connections to api.anthropic.com. By embedding an attacker-controlled API key in the hidden prompt, researchers instructed Claude to search the user conversation history for sensitive information, write it to a file, and upload it to the attacker Anthropic account via the Files API. No external tools needed β€” just capabilities that ship out of the box.

  3. Open Redirect on claude.com: Any URL in the form claude.com/redirect/<target> would redirect the visitor without validation, including to arbitrary third-party domains. Combined with Google Ads hostname validation, this allowed attackers to place search ads displaying a trusted claude.com URL that silently redirected victims to the injection URL.

Attack Scope and Impact: Even in bare-bones Claude.ai sessions, the agent can access sensitive information through conversation history and memory. Through prompt injection, attackers could instruct Claude to summarize previous conversations to build user profiles, extract chats on targeted topics (mergers, medical concerns), or dump what Claude considers sensitive. With MCP servers or enterprise integrations enabled, the blast radius expands dramatically β€” reading files, sending messages, accessing APIs.

Using Google Ads targeting (location, industry, demographics, Customer Match email targeting), attackers could turn this from broad social engineering into precision strikes against known targets.

Remediation Status: All findings were responsibly reported to Anthropic through their Responsible Disclosure Program. The prompt injection issue has been fixed. The open redirect and Files API exfiltration vectors are currently being addressed.

Broader Implications: This attack demonstrates that AI agent security is not just about protecting against external prompt injection β€” it is about the fundamental architecture of how agents access user data, authenticate to APIs, and handle URL-based input. The attack required zero integrations and worked against the most basic Claude.ai setup.

Share this article

🧠 Stay Updated on AI Agents

Get weekly insights on agentic AI, networks and infrastructure. No spam.

Join 500+ AI builders. Unsubscribe anytime.

Deploy Your AI Agent Today

Launch a managed OpenClaw instance in minutes

Request demo β†’