Google Quietly Emerges as Pentagon AI Winner After Rivals Stumble Over Ethics Lines

Image from Gemini Imagen 4
When Anthropic clashed with the Department of Defense last month over demands that it loosen ethical restrictions on its AI technology, a top Google executive was already in the room making the company's case as the reliable alternative.
On Feb. 26, Thomas Kurian, the chief executive of Google Cloud, met privately with Emil Michael, the Pentagon official overseeing AI tool selection, according to two people with knowledge of the meeting who spoke on condition of anonymity to discuss private conversations. By the time the Anthropic dispute resolved in the government's favor, Google had expanded its Pentagon footprint — and avoided the political firestorm that consumed its competitors.
The outcome illustrates how Google's return to military work, after famously swearing off defense contracts in 2018 following employee protests over Project Maven, has been defined by a different calculus than its AI rivals. While Anthropic and OpenAI have been battered by the fallout from their Pentagon dealings, Google has crept ahead by offering what one person familiar with the meeting described as reliable technology "without all the noise."
Google holds structural advantages its startup competitors lack. Alphabet posted $34.5 billion in profit last quarter. It manufactures its own AI chips, operates its own cloud infrastructure, and owns the data centers that power it. Unlike money-losing AI labs burning through capital, Google can afford to be the steady, boring partner.
The Pentagon deal announced last week will extend Google's Gemini AI agents to the Defense Department's three million employees on unclassified networks. The GenAI.gov portal will allow civilian and military personnel to build custom AI agents using natural language. It's a meaningful expansion from the company's earlier, more limited role.
The timing reflects the fallout from a months-long dispute between Anthropic and the Pentagon that ended with the company being labeled a "supply chain risk" by Defense Secretary Pete Hegseth. Anthropic had refused to grant the Pentagon unfettered access to its Claude chatbot, maintaining red lines against mass domestic surveillance and fully autonomous weapons systems. CEO Dario Amodei called the demands "inherently contradictory" — labeling the company a security risk while simultaneously demanding its technology as essential to national security.
The Trump administration moved quickly to punish Anthropic's stance. On Feb. 28, President Trump ordered federal agencies to cease using Anthropic technology. OpenAI, which had been negotiating its own Pentagon deal, announced hours later that it had signed a contract for classified military networks — a prize Anthropic had sought.
But OpenAI's win came with complications. Nearly 500 employees at Google and OpenAI signed an open letter urging their companies to "put aside their differences and stand together" with Anthropic. "The Pentagon is trying to divide each company with fear that the other will give in," the letter read. "That strategy only works if none of us know where the others stand."
OpenAI CEO Sam Altman moved quickly to manage the internal fallout, sending a memo to employees emphasizing that the company's deal maintained prohibitions on mass surveillance and autonomous weapons. "We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions," Altman wrote. The company also amended the deal days later after what Altman described as discussions about additional surveillance protections.
Google DeepMind's own position has been characteristically ambiguous. The company has not formally responded to the Anthropic controversy, but Chief Scientist Jeff Dean — speaking as an individual, the company noted — expressed opposition to mass surveillance. "Surveillance systems are prone to misuse for political or discriminatory purposes," Dean wrote on X.
The long view suggests Google's patient, profitable approach may be paying strategic dividends. The company that spent years rebuilding trust after the 2018 Maven episode now finds itself with a growing defense portfolio at the moment its competitors are most vulnerable to criticism from their own workforces. Whether that represents a genuine ethical choice or simply better positioning is a question the company isn't eager to answer directly.

