Every outlet that covered Anthropic's Pentagon fight this week cited the same thing: the company told an appeals court it cannot manipulate its own AI once deployed in classified networks. What they quoted was a summary. The full argument is in a 96-page brief filed April 22 in the D.C. Circuit that nobody else has reported from directly. Reading it reveals a legal strategy that is also a product liability.
The brief argues the government's security concerns are technically unfounded: Anthropic cannot tamper with a system it no longer controls. The supply-chain-risk label, a designation historically used to block foreign adversaries like China from military systems, does not apply, the company contends, to a developer that has been locked out of its own AI.
It is a coherent legal position. It is also a product problem. Anthropic has built Claude partly on the premise that it is a system humans can steer and audit after deployment, the very guarantee enterprise customers are paying for. The court filing acknowledges that guarantee weakens the moment the Pentagon puts Claude to use: once deployed in classified environments, the alignment techniques that make the system behave as intended may stop working. The company calls this a security feature. Enterprise buyers call it a limitation.
The framing is also commercially awkward. Anthropic has spent years marketing alignment as a feature, not a constraint. In court, the company is pointing to the limits of that feature, its inability to steer Claude once deployed, as proof the government has nothing to worry about.
The Pentagon canceled its $200 million contract with Anthropic after the dispute; OpenAI subsequently signed a deal to provide its technology to the U.S. military. Anthropic filed the April 22 brief to answer questions the D.C. Circuit panel raised at an earlier hearing, where the court declined to block the Pentagon's actions while the case proceeds.
A separate federal court in San Francisco ruled in Anthropic's favor in March. Judge Rita Lin found the Pentagon appeared to have unlawfully retaliated against Anthropic for its public positions on AI safety, writing that nothing in the governing statute supports "the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," according to CNBC. The ruling prompted the Trump administration to withdraw the label. The D.C. Circuit case remains unresolved. The underlying docket is on CourtListener.
Microsoft, Google, Amazon, Apple, and OpenAI have all filed amicus briefs in support of Anthropic's challenge, according to The Guardian. The coalition reflects broader industry concern about how procurement statutes apply to AI companies whose models operate at arm's length from their creators.
What to watch at the May 19 hearing: whether the D.C. Circuit panel finds Anthropic's immutability claim credible. The Pentagon has not yet filed its response brief. If the government produces evidence that Claude can be updated, prompted, or retrieved post-deployment, or if the court simply finds the claim unverified, Anthropic's legal position collapses. The company would then be on record telling the court its AI is beyond its control while telling customers the opposite. Neither version of Claude is easy to sell.