🤖 Agentic AI

Anthropic DMCA Containment Fails: 8,000+ Claude Code Copies Taken Down but AI-Rewritten Versions Evade Removal — WSJ Reports Uncontainable Leak


The Wall Street Journal reported on April 1, 2026 that Anthropic has scrambled to contain the fallout from its accidental Claude Code source code leak by issuing copyright takedown requests that forced the removal of more than 8,000 copies and adaptations from GitHub. However, the containment effort has largely failed.

A programmer used separate AI tools to rewrite Claude Code functionality in other programming languages, keeping the information publicly accessible without triggering further DMCA takedowns. That AI-rewritten version has itself become widely circulated on GitHub, demonstrating a new class of intellectual property challenge in the AI era: once source code leaks, AI can transform it faster than legal mechanisms can contain it.

In an update, the WSJ reported that Anthropic later narrowed its takedown request to cover just 96 copies and adaptations, saying its initial request had reached more GitHub accounts than intended — suggesting the 8,000+ figure may have been an overshoot.

The commercial stakes are enormous. According to PYMNTS, Claude Code run-rate revenue had reached more than $2.5 billion as of February, and its viral adoption among developers has been central to Anthropic's momentum as it pursues a possible public offering at a $380 billion valuation. The source code exposure hands competitors — OpenAI, Google, and xAI — a detailed map of the design logic underlying a product they have been racing to replicate.

The leak revealed commercially sensitive details including:

  • A memory management approach called "dreaming" for task consolidation
  • How Claude Code manages long-running tasks and handles complex multi-step work
  • A background processing mode allowing Claude to continue working while users are idle
  • Anti-distillation mechanisms including fake tool references to detect unauthorized copying
  • References to unreleased models and features not yet publicly available

This is Anthropic's second major leak in a single week, following the accidental disclosure of details about its unreleased Claude Mythos model. Both incidents were caused by human error within the company's content management systems.

According to The Hacker News, the leak from Claude Code version 2.1.88 exposed 512,000 lines of code via an npm packaging error — a JavaScript source map file intended for internal debugging was inadvertently included in the public npm package. Security researcher Chaofan Shou identified the vulnerability early Tuesday morning.

The incident demonstrates a fundamental challenge for AI companies: the same AI tools that make their products valuable can also be weaponized to circumvent traditional IP protection mechanisms when leaks occur.
