OpenAI made its name on consumer software. ChatGPT reached 100 million users in two months. The company signed up hundreds of millions of people for AI assistants. That turnstile is now the setup for a different business.
In February 2026, OpenAI signed a $200 million contract with the Department of War to deploy its AI models on classified government networks, according to NPR. Hours earlier, President Trump had designated rival AI company Anthropic a national security risk, effectively blocking it from federal contracts. OpenAI's deal landed the same day its chief rival got locked out.
The financial logic is direct. OpenAI expected $5 billion in losses in 2024 against $3.7 billion in revenue, according to MIT Technology Review. The company was burning cash faster than it could make it. The defense business offered volume contracts with a customer that cannot go elsewhere: the U.S. government, particularly its intelligence agencies, has deep classification requirements that keep most commercial cloud providers out. OpenAI pitched itself as the trusted vendor for that tier.
The military pivot did not happen overnight. OpenAI began laying the groundwork in January 2024, when it quietly revised its usage policies to remove a blanket prohibition on military applications of its AI models, Jacobin reported. The company had previously stated its technology could not be used "for military and warfare purposes." That line disappeared.
From there, OpenAI ran a sustained hiring campaign to build Washington credibility. It brought on Katrina Mulligan as head of national security partnerships in February 2024. In June 2024, it added Gen. Paul Nakasone, the retired four-star general who had run the National Security Agency and U.S. Cyber Command from 2018 to 2024, to its board. By July 2025, it had hired Joseph Larson, a former executive at defense-tech company Anduril and a former deputy chief digital and artificial intelligence officer at the Pentagon. In total, OpenAI hired more than a dozen government insiders from both parties, all with national security credentials, according to Lever News.
The Anduril partnership, announced in December 2024, put OpenAI's technology directly inside a weapons maker. Anduril's product line includes autonomous drones, a loitering munition called Roadrunner, and an autonomous mission system called Lattice that the company markets for battlefield use, according to Anduril's website. OpenAI's participation in that supply chain marks a concrete escalation from consulting contracts to embedded technology.
Sam Altman, OpenAI's chief executive, acknowledged the optics were rough. "We were genuinely trying to de-escalate things and avoid a much worse outcome," he said in a March 2026 interview with CNBC, "but I think it just looked opportunistic and sloppy." He was right about the second part. The sequencing was not subtle.
OpenAI has published three red lines it says its Pentagon work will respect: no mass domestic surveillance, no autonomous weapons, and no high-stakes automated decisions, according to the OpenAI blog. The Electronic Frontier Foundation (EFF), a digital rights organization that reviewed the contract language, was not reassured. Its lawyers flagged key terms such as "consistent with applicable laws," "deliberately," and "unconstrained" as "weasel words" that "create ambiguity that protects one side or another from real accountability for contract violations." That ambiguity, the EFF argued, could shield OpenAI from accountability if a client pushes toward one of the three prohibited use cases.
The consumer backlash was measurable. ChatGPT uninstalls surged 295 percent in the days following the Pentagon deal announcement, according to CNBC, as users migrated to competitor products. OpenAI's consumer base includes a substantial cohort that joined specifically because the company framed itself as an alternative to defense-adjacent technology companies. That cohort's reaction to the pivot was immediate and public.
What OpenAI built in 2024 and 2025 was a specific kind of insurance: Washington connections, a bipartisan roster of former national security officials, and a policy change that predated the contract by more than two years. The deal with the Department of War was not opportunistic, whatever Altman said. It was the product of a deliberate, two-year lobbying and personnel operation. The company had decided it needed the defense market before the defense market needed it, and then it waited for the moment the door opened.
The door opened when Anthropic got banned. OpenAI walked through.