Palantir Demos Show How the Military Could Use AI Chatbots to Generate War Plans
WIRED has obtained the clearest picture yet of how the US military might be using AI chatbots like Anthropic's Claude to generate war plans—and it's raising fresh questions about the Anthropic-Pentagon dispute.
WIRED reviewed Palantir software demos, public documentation, and Pentagon records that together paint the most detailed view to date of how American military officials may be using AI chatbots, including what kinds of queries are being fed to them, the data they use to generate responses, and the kinds of recommendations they give analysts.
The demos show military officials using Claude to sift through large volumes of intelligence and help inform operational decisions. Palantir announced in November 2024 that it would integrate Claude into the software it sells to US intelligence and defense agencies, saying the integration can help analysts uncover "data-driven insights," identify patterns, and support making "informed decisions in time-sensitive situations."
However, Palantir and Anthropic have shared few details about how Claude functions within the military or which Pentagon systems rely on it—even as the AI tool reportedly continues to be used in some US defense operations overseas. According to The Washington Post, Claude played an instrumental role in the US military operation that led to the capture of Venezuelan president Nicolás Maduro in January, and in the war in Iran.
The timing is awkward. The Pentagon has officially designated Anthropic a "supply-chain risk" after the company refused to grant the government unconditional access to its models, insisting that the systems should not be used for mass surveillance of Americans or fully autonomous weapons. Anthropic has filed two lawsuits alleging illegal retaliation by the Trump administration.
The Department of Defense did not respond to a request for comment. Palantir and Anthropic declined to comment.
Sources
- wired.com — WIRED
- washingtonpost.com — The Washington Post
- washingtonpost.com — The Washington Post