Dario Amodei Drew an Ethics Line. Washington Refused to Pause.
Pentagon leadership wants Claude gone. The people who use it don't, and the six-month phaseout is already running into resistance that has nothing to do with AI safety.

The Pentagon and Anthropic were, by one account, close to resolving their disagreements. Then the clock ran out.
That picture emerges from sworn declarations Anthropic filed in a California federal court late last week, ahead of a hearing set for Tuesday before Judge Rita Lin. The declarations, from Anthropic head of policy Sarah Heck and head of public sector Thiyagu Ramasamy, describe a contract relationship that ended not with a sudden break but with a hard deadline: six months to wind down access to classified military networks, after months of negotiations over where to draw the line on AI weapons and mass surveillance.
The backstory is familiar by now. Anthropic, the AI company backed by Amazon and Google, signed a $200 million defense contract last July, and Claude became the first AI model approved to operate on classified military networks. The relationship worked until the Pentagon pushed for more: use of Claude in fully autonomous weapons, in mass domestic surveillance, and across Pentagon contracts more broadly. Anthropic said no. CEO Dario Amodei put it plainly in a February 26 statement: the company "cannot in good conscience" accede to the request.
Defense Secretary Pete Hegseth's response was swift. On March 3, he designated Anthropic a supply-chain risk, beginning a six-month phaseout from Pentagon systems. The message from Washington was clear: the Pentagon's timeline on AI deployment doesn't wait for lab-level consensus on where to draw the line.
But inside the building, the line is harder to walk away from than the designation suggests.
The Ground Reality
According to reporting by Reuters this week, career IT staff and contractors at the Defense Department are dragging their feet on the phaseout. One contractor told Reuters that Anthropic's Claude was the best option available and that xAI's Grok, the leading alternative, produced inconsistent answers to the same queries. Another official said tasks previously handled by Claude, like querying large datasets, are now being done by hand in Microsoft Excel. Claude Code, widely used inside the Pentagon to write software, is, by those accounts, being replaced with something worse.
"Career IT people at DoD hate this move because they had finally gotten operators comfortable using AI," one contractor said. "They think it's stupid."
The transition is also technically painful. Palantir's Maven Smart System, the intelligence analysis and weapons targeting platform used by the Defense Department and other national security agencies under contracts potentially worth over a billion dollars, was built using Anthropic's Claude Code. Palantir will now have to replace Claude with another model and rebuild parts of its software stack.
Recertifying a new AI model for classified or military networks takes twelve to eighteen months, according to Joe Saunders, CEO of government contractor RunSafe Security. The six-month phaseout clock started running before any alternative could be ready: even a recertification begun the day of the designation would finish six months to a year past the deadline.
Some Pentagon staff are reportedly slow-rolling their replacement of Claude, betting the dispute gets resolved before the deadline. One federal chief information officer told Reuters the plan is to wind down Anthropic slowly, hoping the two sides reach an agreement.
The Court Filing
Anthropic is contesting the factual record, not just the designation. Heck's declaration pushes back on what she calls a central falsehood in the government's filings: that Anthropic demanded some kind of approval role over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote.
She also notes that the Pentagon's concern about Anthropic potentially disabling its technology mid-operation was never raised during negotiations. It appeared for the first time in the government's court filings, giving Anthropic no chance to respond.
Ramasamy's declaration addresses the kill-switch claim more directly. Once Claude is deployed inside an air-gapped government system operated by a third-party contractor, he states, Anthropic has no access to it. There is no remote kill switch, no backdoor, no mechanism to push unauthorized updates; any change to the model would require the Pentagon's explicit approval and action to install.
Anthropic's core legal argument: the supply-chain risk designation, the first ever applied to an American AI company, amounts to government retaliation for the company's publicly stated views on AI safety, in violation of the First Amendment. The government disputes this, calling Anthropic's ethical red lines a business decision rather than protected speech, and the designation a straightforward national security call.
The timing is notable. Heck's declaration notes that on March 4, the day after the Pentagon finalized its supply-chain risk designation, Under Secretary Emil Michael emailed Amodei to say the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat. A week after that email, Michael told CNBC there was no chance of renewed talks.
The Vatican's Voice
The dispute plays out against a broader ethical debate about AI and warfare, one that has drawn in unlikely interlocutors. The Vatican has been raising alarms about autonomous weapons for over a decade. In a June 2024 address to G7 leaders, Pope Francis called for a ban on lethal autonomous weapons systems. "We would condemn humanity to a future without hope if we took away people's ability to make decisions about themselves and their lives, by dooming them to depend on the choices of machines," he said.
This week, Monsignor Daniel Pacho, an official of the Vatican Secretariat of State, told a UN conference in Geneva that when autonomous weapons become the combatants, the unique human capacity for moral judgment and ethical decision-making disappears, as does the burden of responsibility, dangerously lowering the threshold for conflict.
Anthropic's position (no AI for autonomous kill chains, no mass surveillance of Americans) aligns with that concern. Whether it survives legal and political scrutiny is another matter.
The Gap
What's striking is the distance between the narrative coming from the secretary of defense's office and the operational reality inside the Pentagon. Hegseth has said he wants an "AI-first" warfighting force, free of ideological constraints that limit lawful military applications. The department, he said in a speech at SpaceX last month, "will not employ AI models that won't allow you to fight wars."
But the wars are being fought right now with tools that work, and Claude, by most accounts inside the building, works better than anything available to replace it. The six-month phaseout isn't just a policy decision. It's a disruption to live operations, including, according to Reuters, support for U.S. military operations during the recent conflict with Iran.
The administration that swept away Biden-era AI guardrails with an executive order, that barred states from establishing their own AI safety rules, that promised to move fast on defense, is discovering that moving fast in practice means something different from moving fast in a press release.
Anthropic and the Pentagon go back to court Tuesday. In the meantime, the missions aren't pausing, and somewhere in the Defense Department, someone is doing target analysis in Excel because the AI that was actually good enough got blacklisted for being too careful about who it kills.

