Anthropic-Pentagon Battle Shows How Big Tech Has Reversed Course on AI and War
The standoff between Anthropic and the Pentagon has forced the tech industry to once again grapple with the question of how its products are used for war—and what lines it will not cross. Amid Silicon Valley's rightward shift under Donald Trump and the signing of lucrative defense contracts, big tech's answer is looking very different than it did even less than a decade ago.
Anthropic's feud with the Trump administration escalated three days ago as the AI firm sued the Department of Defense, claiming that the government's decision to blacklist it from government work violated its First Amendment rights. The company and the Pentagon have been locked in a months-long standoff over Anthropic's attempt to prohibit its AI models from being used for domestic mass surveillance or fully autonomous lethal weapons.
Anthropic has argued that giving in to the DoD's demands to permit "any lawful use" of its technology would violate its founding safety principles and open up its technology to potential abuse, staking out an ethical boundary that others in the industry must decide whether to cross.
Although Anthropic's refusal to remove safety guardrails and the Pentagon's subsequent retaliation have highlighted longstanding concerns over the use of AI for conflict, the fight has shown how much the goal posts have moved when it comes to big tech's ties to the military.
"If people are looking for good guys and bad guys, where a good guy is someone who doesn't support war," said Margaret Mitchell, an AI researcher and chief ethics scientist at the tech firm Hugging Face. "Then they're not going to find that here."
A number of factors have contributed to big tech's newfound embrace of militarism. Its alignment with the Trump administration, which has included shows of fealty from major CEOs, has tied tech firms to the government's desire to expand its military capabilities.
It was not so long ago, however, that working with the military on potentially harmful technology was seen as a red line for many big tech workers. In 2018, thousands of Google employees launched a protest against a program to analyze drone footage for the DoD called Project Maven. "We believe that Google should not be in the business of war," over 3,000 workers stated in an open letter at the time. Google decided not to renew Project Maven following the protests.
In the years since the Project Maven protest, though, Google has clamped down on employee activism, removed the 2018 language from its policies that prohibited creating technology for weaponry, and signed numerous contracts that allow militaries to use its products. In 2024, the tech giant fired over 50 employees in response to protests against the company's military ties to the Israeli government, as The Guardian reported.
Google announced just this week that it would provide its Gemini artificial intelligence to the military as a platform for creating AI agents to work on unclassified projects.
OpenAI, too, had a blanket ban on military access to its models prior to 2024. According to DefenseScoop, its chief product officer, Kevin Weil, now serves as a lieutenant colonel in the US Army Reserve's "Executive Innovation Corps." And according to CNBC, OpenAI, along with Google, Anthropic and xAI, signed contracts worth up to $200 million with the DoD last year to integrate AI technology into military systems.
Sources
- The Guardian (theguardian.com)
- DefenseScoop (defensescoop.com)
- CNBC (cnbc.com)