When Senator Elissa Slotkin introduced the AI Guardrails Act last week, she described it as addressing the most basic questions in AI policy: who gets to decide when a machine takes a human life, and whether the government can use artificial intelligence to surveil Americans on its own soil. The bill she filed on March 17 is five pages long and has no cosponsors. It is not subtle.
The legislation would do three concrete things, according to a press release from Slotkin's office: ban the Department of Defense from using AI to fire autonomous weapons without a human in the loop, prohibit DoD from using AI to conduct domestic mass surveillance, and ensure that the decision to launch nuclear weapons rests solely with the Commander-in-Chief. A separate effort from Senator Adam Schiff of California, still being drafted, takes a similar approach to the same set of problems. The two senators are not coordinating, as far as anyone has said publicly. But they are reading the same news.
What prompted this legislative movement, which so far involves only Democrats, is a real-time rupture between the federal government and Anthropic, the AI safety company behind Claude. On February 27, President Trump ordered all federal agencies to cease use of Anthropic technology. Defense Secretary Pete Hegseth designated the company a supply chain risk the same day, posting on social media that America's warfighters would never be held hostage by the ideological whims of Big Tech. The decision, Hegseth said, was final.
The trigger was a contract negotiation that collapsed. Anthropic had signed a $200 million contract with the Pentagon in July 2025 and became the first AI lab to deploy its technology on the agency's classified networks, according to CNBC. But the company refused to sign contract language that would have allowed its safety safeguards to be overridden at will. New language, framed as a compromise, was paired with legalese that would have permitted those safeguards to be disregarded, Anthropic told ABC News. The safeguards in question were Anthropic's public commitments: no deployment of Claude for mass surveillance of Americans, and no use of the technology in fully autonomous weapons systems. Anthropic said the contract language it received made virtually no progress on either front.
What followed was unusual by any measure. Anthropic filed suit against the federal government on March 9 in a California federal court. Nearly 150 retired federal and state judges filed an amicus brief supporting the company on March 17. And on March 24, Judge Rita F. Lin of the U.S. District Court for the Northern District of California heard Anthropic's request for a preliminary injunction and appeared skeptical of the government's position. The Pentagon's standard for supply chain risk designation, Lin said, "seems like a pretty low bar." A DOJ lawyer, pressed on what exactly the risk was, said the government worried that Anthropic might in the future install "a kill switch or functionality that changes how it functions." That, in the government's view, was unacceptable.
There is a real definitional problem hiding in this dispute that neither side has fully resolved. Anthropic has drawn a clear line: no AI that enables machines to kill without a human in the decision chain, and no AI that enables mass domestic surveillance. The Pentagon, under current leadership, seems to believe those commitments are negotiable at the contract level. Schiff, asked about relying on an AI CEO's word versus statutory requirements, said he would "have a lot more confidence if these were statutory requirements, than relying on the lawfulness of the Pentagon or the word of an AI CEO."
Slotkin was blunter. "The Pentagon was able to target Anthropic in this case," she told NBC News, "and is going to spend the next year and God knows how many millions of dollars ripping out Anthropic from all the classified systems over a dispute that could have been handled if we just had law."
Whether these bills go anywhere is an open question; the AI Guardrails Act still has no cosponsors as of this writing. The Senate is not known for moving quickly on anything, let alone legislation that requires defining what an autonomous weapon actually is, a question that has resisted international consensus for years. The definition of "lethal autonomous weapon" is genuinely contested, and Anthropic's position and the Department of Defense's definition may not be the same thing. As Lawfare has noted, what counts as "human-on-the-loop" versus "human-in-the-loop," and whether an operator must actively authorize each individual strike or merely supervise a running system, are questions that militaries, ethicists, and AI researchers have debated without resolution.
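To make that distinction concrete, here is a toy sketch in Python. Every name in it is hypothetical; it illustrates the two control topologies being debated, not the bill's text, any DoD system, or Anthropic's products.

```python
# Toy illustration only. Nothing here reflects real systems or the bill.
import threading
from dataclasses import dataclass
from typing import Callable, Iterable


@dataclass
class Engagement:
    """Hypothetical record of a proposed strike."""
    target_id: str


def engage(e: Engagement) -> None:
    print(f"engaging {e.target_id}")


def human_in_the_loop(engagements: Iterable[Engagement],
                      authorize: Callable[[Engagement], bool]) -> None:
    # In-the-loop: the system idles until a human affirmatively
    # approves each individual engagement.
    for e in engagements:
        if authorize(e):
            engage(e)


def human_on_the_loop(engagements: Iterable[Engagement],
                      abort: threading.Event) -> None:
    # On-the-loop: the system proceeds on its own; the supervising
    # human holds only a standing veto, not a per-strike approval.
    for e in engagements:
        if abort.is_set():
            break
        engage(e)


if __name__ == "__main__":
    targets = [Engagement("alpha"), Engagement("bravo")]
    # In-the-loop: a (simulated) human must say yes to each strike.
    human_in_the_loop(targets, authorize=lambda e: e.target_id == "alpha")
    # On-the-loop: strikes proceed until the operator trips the abort.
    veto = threading.Event()
    human_on_the_loop(targets, abort=veto)
```

The legislative question is, in effect, which of these two shapes a statute should mandate.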
The counterargument to legislation like this is predictable: move fast, let the Pentagon figure it out, statutory constraints on a battlefield are a luxury. Hegseth's framing, that warfighters will never be held hostage by the ideological whims of Big Tech, plays well in that register. The counter-counterargument is that Anthropic was not holding anything hostage; it was refusing to build a product that violated its own stated principles. The company said it had told the Pentagon before signing the July contract that it would not remove those commitments, and that it signed believing the contract would be respected. The DOJ lawyer at the March 24 hearing appeared to acknowledge that the government knew exactly what Anthropic would and would not agree to before the contract was signed.
What is worth watching next: a ruling on the preliminary injunction is pending, and Judge Lin is clearly troubled by the government's reasoning. If she grants the injunction, the blacklisting order is paused while the full lawsuit proceeds, and the pressure on Congress to legislate rather than litigate eases. If she does not, the case moves on a longer timeline and the legislative track becomes more urgent. Either way, the question Slotkin put on the table is not going away. Who decides what a machine is allowed to do to a person is not a question that contract negotiations resolve.
Schiff put it plainly: whenever a technology "has the capability of taking a human life, there needs to be a human operator in the chain of command. We don't want to delegate that kind of responsibility over life and death to an algorithm." He is right that this should not be an open question. It is not clear that anyone in government currently has an answer.