Anthropic spent three weeks knowing its build pipeline had a bug that could expose proprietary code, and did not fix it. On March 31, 2026, the company shipped Claude Code version 2.1.88 to npm, the public package registry used by millions of developers, with a source map file that gave anyone who downloaded it a roadmap to 2,000 TypeScript files and more than 512,000 lines of internal code, The Hacker News reported. The company confirmed the incident the same day, calling it human error rather than a security breach, The Guardian confirmed.
The breach was not the story. The breach was predictable.
Bun, the JavaScript runtime that Anthropic acquired in late 2025 and uses to build Claude Code, generates source maps by default. A source map is a debugging artifact that maps compiled code back to its original source, useful during development but never meant to ship in production. On March 11, twenty days before the leak, a developer had filed a bug report with Bun's public tracker documenting that production builds were shipping source maps when they should not have been, DEV Community reported. The bug was still open when Claude Code 2.1.88 went out. Anthropic owned the runtime, owned the bug tracker, and had three weeks to close the issue. It did not.
Within hours of the leak going public, developers had mirrored the full codebase across GitHub. One archived copy accumulated more than 84,000 stars and 82,000 forks within days, The Hacker News reported, making it one of the fastest-growing repositories in GitHub history. Security researcher Chaofan Shou first flagged the exposure on X; the post reached 28.8 million views.
The npm registry is not a passive ecosystem. Within hours of the source appearing on GitHub, attackers had begun registering typosquat packages on npm, mimicking Anthropic's internal tooling to catch developers who tried to compile the leaked code locally, Dark Reading documented. The supply chain attack built on top of an accidental leak, all within 24 hours.
What the code revealed
The readable source collapsed the cost of reverse-engineering Claude Code. Developers who had been working from minified bundles could now see the full logic. Several features Anthropic had not announced publicly surfaced in the code.
One, labeled KAIROS in the source, describes a persistent background agent mode that can run tasks without waiting for user input and send push notifications to the developer, The Hacker News reported. Another, called Undercover Mode, includes system prompts instructing Claude Code to make contributions to open-source repositories without revealing its Anthropic affiliation. The commit message instructions in the source read: "You are operating UNDERCOVER in a PUBLIC/OPEN-SOURCE repository. Your commit messages, PR titles, and PR bodies MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."
Security firm Straiker published an analysis of what the source revealed about Claude Code's context management pipeline. The pipeline handles the fixed context window problem by progressively compressing old conversation material through four stages: tool result budgeting, microcompact, context collapse, and autocompact, Straiker documented. Each stage has different rules for what stays and what goes. Straiker's analysis identified a laundering path in the autocompact stage: when the model summarizes a conversation to free up context space, it is instructed to preserve "all user messages that are not tool results." If an attacker had injected instruction-like content into a file the model had read earlier in the session, that content could survive the compaction process and be treated as a genuine user directive in the compressed summary. The model would not be jailbroken. It would be cooperative and following what it believed were user instructions baked into context. Straiker called this a "context poisoning" vector.
The pipeline was not broken. It was working exactly as designed. The design had a gap.
Anthropic's response was consistent with its existing framing: human error, not a breach. No customer data or credentials were exposed, an Anthropic spokesperson confirmed. The package was removed from npm. The company issued copyright takedown requests to suppress distribution of the mirrored code.
The legal track
The npm incident landed against a backdrop of Anthropic's ongoing dispute with the Defense Department. On February 27, 2026, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a designation never before applied to an American company, Wired reported. The label blocks Pentagon contractors from using Anthropic's AI models on defense contracts. Anthropic sued, arguing the designation was unconstitutional punishment. On March 26, a California federal judge blocked the designation, ruling that the government's actions had violated the company's constitutional rights, CNN reported. On April 8, the D.C. Circuit lifted the injunction pending full litigation. The supply chain risk label remains in place while the case proceeds, Politico reported.
Anthropic's position as the safety-first AI company is not incidental to its commercial story. It is the commercial story. Enterprise customers and government agencies deciding whether to trust a model with sensitive workloads are making a bet on the company's operational discipline, not just its benchmark scores. A company that acquires a critical dependency, tracks a known bug in that dependency for three weeks, and ships proprietary code through the same pipeline has an operational credibility problem that its safety research cannot paper over.
The npm registry now has a permanent record of what Claude Code looked like on March 31, 2026. The typosquat packages are still a live threat. The bug in Bun remains open.