Anthropic-Pentagon Dispute Reverberates in European Capitals

The standoff between Anthropic and the Pentagon has escalated into one of the most consequential confrontations in the short history of the AI industry—a clash that is reshaping how Washington thinks about its relationship with frontier AI companies.
At the center of the dispute: two red lines Anthropic CEO Dario Amodei drew in the sand. His company, which secured a $200 million contract to deploy Claude within classified government systems, would not permit its AI to be used for mass surveillance of Americans or for lethal autonomous weapons that fire without meaningful human oversight.
The Pentagon wanted unrestricted access. Defense Secretary Pete Hegseth gave Amodei until 5:01 p.m. on Friday, February 27, to relent. He did not.
Within hours, President Trump directed all federal agencies to stop using Anthropic products. Hegseth designated the company a supply-chain risk to national security—a designation historically reserved for foreign adversaries like Huawei.
The legal fallout has been swift and pointed. Anthropic filed suit in California federal court, arguing the administration violated its First Amendment rights and punished the company for its ideological stance. The government fired back with a 40-page response this week, claiming Anthropic poses an "unacceptable risk to national security" because it might "attempt to disable its technology or preemptively alter the behavior of its model" during warfighting operations if its red lines were crossed.
That argument is not landing well with legal experts.
"The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they have taken against Anthropic," said Chris Mattei, a former Justice Department attorney specializing in First Amendment issues. The DOD, he noted, has produced no investigation supporting its claim that Anthropic would sabotage its own systems mid-operation.
Silicon Valley has not stayed quiet. OpenAI, Google, and Microsoft employees—competitors and investors in Anthropic alike—have filed amicus briefs in support of the company. Industry associations representing hundreds of firms urged a court to pause the blacklisting. The argument from the tech industry: if the Pentagon can label a safety-conscious AI company a supply-chain risk for drawing ethical lines, no vendor is safe.
The geopolitical timing is delicate. Claude has been deployed in the ongoing war against Iran, according to the Washington Post. Reports emerged earlier that the model was used in the invasion of Venezuela and the capture of Nicolas Maduro—deployments that reportedly prompted Anthropic to reinforce its usage restrictions with the Pentagon in the first place.
The Pentagon is not waiting for the courts to resolve the dispute. Cameron Stanley, the DOD's chief digital and AI officer, told Bloomberg the department is "actively pursuing multiple LLMs into appropriate government-owned environments" and that engineering work has begun on replacements. OpenAI and Elon Musk's xAI have already signed agreements to fill the gap.
A hearing on Anthropic's request for a preliminary injunction is scheduled for Tuesday, March 24. If the court sides with Anthropic, the supply-chain risk designation could be temporarily blocked pending a full trial. If it sides with the government, the blacklisting stands—and the precedent for how Washington treats AI vendors that set ethical limits on military use will be firmly set.
What is not in dispute: the US government wants AI that can fight wars without constraints, and one of America's leading AI companies says there are lines it will not cross. That collision has arrived ahead of schedule.

