Dario Amodei and Sam Altman once shared a house on Delano Avenue in San Francisco, along with Greg Brockman and his wife. A decade later, the two men would be in federal court on opposite sides of a national security dispute, and a federal judge would use the word "Orwellian" to describe what the Trump administration had tried to do to Amodei's company.
The personal is not incidental. The rift between the two AI pioneers, which began as a disagreement over direction at OpenAI and hardened into a standoff over government contracts, is now the story of American AI policy. Both Anthropic and OpenAI are valued at more than $300 billion, both are hurtling toward public markets, and both have staked out irreconcilable positions on what their AI systems will and will not do for the U.S. government. The question of who was right in 2020 is now a question about who will control the most consequential technology of the next decade.
Amodei left OpenAI in late 2020 with seven employees, including his sister Daniela, after disagreements over Brockman's leadership role, the direction of GPT, and a persistent feeling of being undervalued — he had seriously considered quitting during the Musk-era layoffs in 2017, according to Business Insider. He founded Anthropic in early 2021. Altman stayed. The two men have not reconciled.
The rupture became public in March, when Defense Secretary Pete Hegseth designated Anthropic a supply chain risk — a designation typically reserved for foreign companies that pose national security concerns — after Anthropic refused to accept contract language requiring it to agree to "any lawful use" of its AI systems without restrictions on mass surveillance or autonomous weapons, according to The New York Times. Judge Rita F. Lin of the U.S. District Court for the Northern District of California blocked the designation with a preliminary injunction on March 26, writing that "nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," CNN reported.
OpenAI took a different path. It announced its own agreement with the Pentagon in March, with three stated red lines: no mass domestic surveillance, no autonomous weapons systems, and no high-stakes automated decisions like social credit systems. OpenAI described the arrangement as "cloud-only deployment with a safety stack that we run," with forward-deployed engineers in the loop and strong contractual protections rather than policy assurances, according to OpenAI's own blog. Altman had messaged staff before the deal closed: "Cannot believe I am trying so hard to save a CEO who has gone after me in the press," Axios reported.
The feud predates the Pentagon dispute by years. The Delano Avenue house is the origin story: Dario and Daniela Amodei lived there with Greg Brockman and his wife during OpenAI's early years, according to The Wall Street Journal via Livemint. A staffing conflict over GPT — Daniela Amodei, co-leading the project with Alec Radford, threatened to resign if Brockman joined — exposed a pattern that would define the split: Altman had made separate promises to each side. He told Amodei that Brockman and Ilya Sutskever would not be in a leadership position. He told Brockman and Sutskever that they would be able to fire Altman. The two promises could not both be kept. Amodei chose to leave, the Journal reported.
After leaving, Amodei wrote a memo distinguishing market companies from public good companies, with an ideal split of 75 percent public good and 25 percent market. He later compared OpenAI and rival AI companies to tobacco companies knowingly selling harmful products, according to Livemint. In an internal Slack post reviewed by Fortune, Amodei called OpenAI's messaging "mendacious, safety theater, and an example of who they really are," adding that many of Altman's public comments were "straight up lies and gaslighting," Fortune reported. Altman called one of Anthropic's Super Bowl ads — a four-spot campaign titled "A Time and a Place," featuring the words betrayal, deception, treachery, and violation — "clearly dishonest" and accused the company of doublespeak. Anthropic spent $8 million on the campaign, Fortune reported.
Anthropic's internal brand strategy cast the company as the "healthy alternative," positioning OpenAI and its other rivals as harmful, Livemint reported. The label attached itself, somewhat ironically, to the one AI company the U.S. government tried to designate as a threat to national security.
Underneath the personal feud is a financial pressure that makes the principle expensive to hold. Both companies are valued at more than $300 billion and are racing toward IPO, according to Livemint. OpenAI reached $25 billion in annualized revenue in early 2026; Anthropic reached $19 billion, WinBuzzer reported. Neither is profitable — and the two companies are not counting the same way. Unlike OpenAI, which reports net revenue after hyperscaler fees, Anthropic reports the full gross amount billed through cloud channels before backing out the hyperscalers' take, a methodological difference that investors have scrutinized. The Pentagon contract is one of the few things that can move the needle on revenue at that scale — Anthropic had an existing $200 million contract with the Pentagon before the supply chain risk dispute arose, according to The New York Times and CNBC.
Anthropic has made concrete choices that show where its line is. It turned down hundreds of millions of dollars in revenue from firms linked to the Chinese Communist Party, Anthropic said in a public statement. It was the first frontier AI company to deploy models on U.S. government classified networks and at National Laboratories, and the first to provide custom models for national security customers — which makes the supply chain risk designation a striking reversal. Amodei told CBS News in early March that "the two sides have much more in common than we have differences," striking a more conciliatory tone than his internal Slack post. The case is ongoing.
OpenAI's agreement sidesteps the question of principle by embedding it in contract. Its red lines are stated, contractual, and enforced by its own safety stack rather than by government policy. Altman got the deal. Amodei got the injunction.
Whether either of them can make the numbers work while keeping their respective positions is the question neither company has answered yet.