Sam Altman has a new word for the moment OpenAI stopped being a safety company: miscalibrated.
In an interview with Laurie Segall on the Mostly Human podcast released Thursday, the OpenAI CEO admitted he misjudged public distrust of the Pentagon deal — the classified contract announced February 28, the same day the US struck Iran and hours after the Pentagon said it would sever ties with Anthropic over ethical concerns.
"I think there's at least a group of loud people online who really don't trust the government to follow the law," Altman told Segall. "And that feels like a very bad sign for our democracy."
The miscalibration framing is a tidy exit ramp. It suggests the deal itself was sound — the problem was communication. But the record tells a fuller story.
OpenAI's founders built the company on a specific promise: that artificial general intelligence should benefit humanity, and that dangerous military applications were off the table. Greg Brockman and his wife Anna donated $25 million to Donald Trump's MAGA Inc. super PAC in September 2025, five months before the Pentagon announcement. When that became public, Anthropic CEO Dario Amodei called Altman mendacious in a leaked memo, accusing him of giving "dictator-style praise" to the same administration now awarding him the contract.
Caitlin Kalinowski, OpenAI's robotics and hardware lead, resigned March 7 citing concerns about "surveillance of Americans without judicial oversight and lethal autonomy without human authorization." Her resignation letter named the two fears that safety researchers had raised from the day the deal was announced.
Ninety-eight OpenAI employees and 796 Google staff signed open letters protesting the contract. OpenAI responded with a statement claiming the Pentagon agreement had "more guardrails than any previous agreement for classified AI deployments, including Anthropic's." That bar is not high. Anthropic was cut loose.
The guardrails Altman promised — no autonomous weapons, no domestic surveillance — do not appear in the contract's operational terms. According to The Guardian, Altman told employees explicitly: "You do not get to make operational decisions." The Pentagon does.
In the podcast, Altman pointed to the Manhattan Project, the Apollo Program, and the Interstate Highway System as precedents for government-led technological ambition. The analogy lands differently in 2026 than it did in 2023. The Manhattan Project was classified because the work itself was secret. OpenAI's deal is classified because the customer does not want scrutiny. These are not the same thing.
Defense procurement experts estimate the contract is worth between $500 million and $2 billion over five years. For that money, the Pentagon gets access to OpenAI's most capable models on classified networks. OpenAI gets a customer it cannot refuse, a legitimacy anchor, and a story about serving national security.
What it gave up is harder to price. The safety commitments that distinguished OpenAI from its competitors — the ones that attracted researchers like Kalinowski, the ones that made OpenAI worth building — are gone. Not suspended. Not reviewed. Gone.
Altman told Segall that "one of the most important questions the world will have to answer in the next year is: Are AI companies or are governments more powerful?" He said governments should be. The deal suggests he has already given his answer.
The miscalibration, it turns out, was not about messaging. It was about timing — how long the company's stated values would survive contact with a $500 million contract.