The D.C. Circuit just handed the Trump administration a decisive win in its fight to blacklist Anthropic from American military contracting, and the opinion reads like a warning to every AI company considering a legal challenge.
In a four-page order issued Wednesday, a three-judge panel denied Anthropic's request to pause the Pentagon's supply chain risk designation while litigation proceeds. The ruling lets the Department of Defense keep excluding Claude from military work, both through direct contracts and through contractors like Amazon, Microsoft, and Palantir, while the case moves forward. The court scheduled an expedited oral argument for May 19.
"We deny Anthropic's motion for a stay pending review on the merits," the order said. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict."
Judges Gregory Katsas and Neomi Rao, both Trump appointees, wrote that Anthropic had not met the stringent requirements for an emergency stay. The court acknowledged the company would "likely suffer some degree of irreparable harm," both financial and reputational, but found that those interests "seem primarily financial in nature." It was also unpersuaded that Anthropic's speech had been chilled during the litigation.
This ruling is the second major court decision in Anthropic's fight in two weeks, but the two cut in opposite directions. A California federal judge blocked a narrower, Pentagon-only track of the designation on March 26, calling it likely unlawful retaliation for Anthropic's public stance on AI safety. That ruling, from Judge James Lin in San Francisco, was a real win for the company. But it covered only 10 U.S.C. § 3252, a statute specific to Defense Department information systems.
The D.C. Circuit case concerns something broader: 41 U.S.C. § 4713, the Federal Acquisition Supply Chain Security Act, under which the blacklist can expand from the Pentagon to every civilian agency in the government after an interagency review that, Anthropic argues, never happened. The D.C. Circuit just let that track stand. The split is consequential: Anthropic won in California but remains locked out of DoD work, and the government-wide track remains live.
Acting Attorney General Todd Blanche called the ruling "a resounding victory for military readiness." In a post on X, he wrote: "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."
The underlying dispute centers on two restrictions Anthropic refuses to lift from its terms of service: a ban on fully autonomous weapons systems, including armed drone swarms operating without human oversight, and a prohibition on mass surveillance of U.S. citizens. Pentagon officials, including Undersecretary Emil Michael, publicly characterized those restrictions as obstacles to military competitiveness, citing programs like the Golden Dome missile defense initiative and rapid response capabilities against hypersonic threats, according to reporting by Bitcoin.com. Anthropic offered case-by-case exceptions but refused to eliminate the core guardrails. The company has said the designation could cost it billions of dollars in lost business, per Reuters.
The designation itself is historically extraordinary. Before Anthropic, the only company ever publicly designated a supply chain risk under these statutes was Acronis AG, a Swiss cybersecurity firm with reported Russian ties — and that was limited to intelligence community contracts. Anthropic is the first American company, the first AI company, and the first to face a government-wide designation that could reach every federal agency.
What happens next: the May 19 oral argument will be the first substantive court review of whether the designation itself is lawful, not just whether it should be paused. A final ruling against Anthropic would not necessarily end the fight — the company could petition for en banc review or appeal to the Supreme Court. But a win for the government would establish that the executive branch has broad authority to exclude AI vendors from federal contracting based on national security grounds, with limited judicial oversight.
The practical effect on Anthropic's business is real but contained for now. The company can still contract with civilian agencies, and its commercial business remains intact. The Pentagon contract, a $200 million deal signed in July 2025, is frozen. The more existential question — whether the U.S. government can effectively quarantine a leading AI company from the defense industrial base — will be answered in courtrooms, not boardrooms.