The Pentagon told a federal court that Anthropic's AI poses a systemic national security risk. NSA is using it anyway.
That contradiction sits at the heart of a sealed lawsuit Anthropic filed against the Defense Department in March. The sealed record is the only place where the government has actually argued why Mythos should be restricted, and the only place where the case for NSA's continued access has been made in full. Nobody outside the courtroom has been able to read either argument.
What is known: the Pentagon applied a supply-chain risk designation to Anthropic in February, a label that restricts how federal contractors can use the company's tools. Anthropic filed suit challenging the designation, arguing it does not meet the legal threshold for restriction. A federal judge in California temporarily blocked the designation in March; this month the D.C. Circuit Court of Appeals declined to extend that block, leaving the designation in effect while the case proceeds. Anthropic's core legal theory, per reporting in TechCrunch, CNBC, and the New York Times, is that the designation was improperly applied and possibly retaliatory. What remains sealed: the specific national security basis the government has claimed for keeping Mythos restricted, and the specific legal mechanism that allows NSA, uniquely among federal agencies, to keep running it.
The contradiction is also the policy story. NSA is using Mythos under authorities that bypass the procurement restrictions applying to the rest of the Defense Department, as Reuters and TechCrunch have confirmed. CISA, the federal cybersecurity agency that would lead any defensive deployment of a model like Mythos, has no access, as Axios reported; the agency declined to comment. CISA's lack of access is structural, not the result of a formal denial that would appear in any public record.
The model itself crossed a threshold no prior AI had reached: it completed a 32-step network attack simulation in 3 of 10 attempts and succeeded on 73 percent of expert-level cybersecurity challenges that every previous AI had failed, according to the UK AI Security Institute's evaluation. Mythos has already identified thousands of high-severity vulnerabilities across major operating systems and web browsers, bugs that remain unpatched. The UK AI Security Institute confirmed it has access to the same model. CISA does not.
The policy vacuum exists at a moment when CISA's capacity to act on any access it might eventually gain is itself in question. The agency has seen significant staffing and resource reductions under the Trump administration, cutting the federal cyber defense capacity that would be needed to operationalize a tool like Mythos. Having the model on paper and having the people and infrastructure to deploy it defensibly are separate problems.
Anthropic's position has also evolved. Last Friday, chief executive Dario Amodei met with White House chief of staff Susie Wiles and Treasury Secretary Scott Bessent, as Axios reported. The administration called the meeting productive, suggesting the relationship is in a different phase than it was when the lawsuit was filed. Whether that produces a settlement, a reversed designation, or no change at all depends partly on arguments inside that sealed record — arguments that, if they ever become public, may reveal whether the national security case against Mythos was ever actually made, or whether it was a procurement dispute dressed in systemic-risk language.
The case is pending. So is any formal decision on CISA access. What is not in question is that the gap exists, that it is structural rather than accidental, and that it widens each month a model capable of finding vulnerabilities stays in the hands of one federal cyber agency and not the other.