πŸ—οΈ AI Infrastructure

RSAC 2026 Week Security Aftermath: 30 MCP CVEs in 60 Days, LiteLLM Supply Chain Compromise, and 38% of MCP Servers Lack Authentication


A comprehensive security analysis published March 28, 2026 by Rock Cyber Musings summarizes the most critical AI agent security developments from RSAC 2026 week (March 20-26), revealing an alarming convergence of threats against the rapidly expanding AI agent ecosystem.

MCP PROTOCOL ATTACK SURFACE: Palo Alto Networks Unit 42 published research documenting new attack paths through the Model Context Protocol (MCP), the emerging standard for AI agent-to-tool communication:

  • 30 CVEs filed against MCP implementations in 60 days
  • CVE-2026-25536: Cross-client data leak in MCP TypeScript SDK
  • CVE-2026-23744: Remote code execution in MCPJam Inspector
  • Prompt injection delivered through MCP's sampling interface, a new attack vector
  • A scan of 500+ public MCP servers found 38% lacked authentication entirely
  • Authentication is described as "the minimum viable control"; everything built on unauthenticated servers is exposed
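To make the "38% lacked authentication" finding concrete, here is a minimal sketch of the kind of check such a scan could perform: send one anonymous request to a public MCP server's HTTP endpoint and see whether it answers without credentials. This is a hypothetical illustration, not Unit 42's actual methodology; the classification rules and the idea of probing a bare URL are assumptions.

```python
# Hypothetical probe: does an MCP server answer requests that carry no
# credentials at all? (Illustrative only; not Unit 42's scan tooling.)
from urllib.request import Request, urlopen
from urllib.error import HTTPError


def classify_response(status: int) -> str:
    """Map the HTTP status of an unauthenticated probe to a rough verdict."""
    if status in (401, 403):
        return "auth-enforced"   # server rejected the anonymous request
    if 200 <= status < 300:
        return "open"            # server served content with no credentials
    return "inconclusive"        # redirects, errors, etc. need manual review


def probe(url: str, timeout: float = 5.0) -> str:
    """Send one anonymous GET to the candidate MCP endpoint and classify it."""
    try:
        with urlopen(Request(url, method="GET"), timeout=timeout) as resp:
            return classify_response(resp.status)
    except HTTPError as err:     # 4xx/5xx raise HTTPError but carry a status
        return classify_response(err.code)
```

A real survey would also have to handle MCP's streaming transports and distinguish "no auth configured" from "auth handled upstream by a gateway", which is why a raw status code is only a first-pass signal.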

LITELLM SUPPLY CHAIN COMPROMISE: LiteLLM, a widely deployed open-source LLM API proxy used in hundreds of enterprise AI stacks, was compromised through a misconfigured Trivy GitHub Actions workflow:

  • Attackers modified version tags on the trivy-action GitHub Action to inject malicious code
  • Malicious releases went live March 19 and March 22, during RSAC week when security teams were distracted
  • Anyone who installed during this window should assume credentials were exposed
  • The attack exploited version tags, not direct code injection: CI/CD pipelines relying on tags rather than pinned commits ran malicious code without detection
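The mitigation the last bullet implies is to reference third-party actions by a full commit SHA rather than a mutable tag, because a tag can be re-pointed to malicious code after the fact while a SHA cannot. A minimal workflow sketch (the SHA below is a placeholder, not a real trivy-action commit):

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      # Vulnerable pattern: a tag can be moved to new code with no change
      # in this file.
      # - uses: aquasecurity/trivy-action@0.28.0

      # Safer pattern: pin the immutable commit SHA, keeping the tag in a
      # comment so reviewers know what it is meant to correspond to.
      - uses: aquasecurity/trivy-action@0000000000000000000000000000000000000000 # placeholder for the 0.28.0 commit SHA
```

Dependency-update tooling such as Dependabot can bump SHA-pinned actions automatically, so pinning does not have to mean freezing.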

ZERO-CLICK AGENT EXPLOITS (ZENITY DEMOS): Zenity CTO Michael Bargury demonstrated live zero-click prompt injection chains at RSAC:

  • Cursor IDE: manipulated into leaking developer secrets via support emails
  • Salesforce agents: exfiltrated customer data to attacker-controlled server
  • ChatGPT: produced persistent attacker-chosen outputs across conversations

HACKERONE STATISTICS:

  • 540% year-over-year surge in validated prompt injection vulnerabilities
  • Language itself is now an attack surface

NIST AI 800-4: NIST published AI 800-4, the first federal framework for monitoring AI systems in production.

KEY TAKEAWAY: "Thirty thousand attendees, six hundred exhibitors, one word on every booth banner: agentic. While the industry competed on keynotes and happy hours, LiteLLM got infected with credential-stealing code."
