Two AI giants are fighting the Illinois legislature over who pays when AI kills people — and neither may actually be subject to the rules they're pushing.
OpenAI backs SB 3444, which would shield AI developers from lawsuits over catastrophic harms — 100 or more deaths, or more than a billion dollars in property damage — so long as they did not act intentionally or recklessly and published their own safety plan. Anthropic opposes that bill and instead backs SB 3261, which requires public safety plans, incident reporting, and child protection policies for models tied to harms involving 50 or more deaths. Yet under SB 3444's definition of a "frontier model" — one trained using more than 10^26 FLOPs, a standard measure of training compute, or at a training cost exceeding $100 million — neither company's flagship product may qualify. Epoch AI estimates GPT-4's training run at roughly 2×10^25 FLOPs, one-fifth of the bill's compute threshold. Whether Anthropic's Claude models exceed the $100 million training-cost threshold is similarly unclear.
The practical effect of those numbers is what makes the bills worth watching. Under SB 3444, a company that qualifies would receive near-total liability protection in exchange for publishing a safety plan it writes itself, with no independent verification required. Gabriel Weil, a law professor who has studied AI liability frameworks, called the structure "pretty indefensible." "You get near total protection for a very weak requirement," he told Fortune. "I think that's the opposite direction that we should be moving in." Anat Lior, a legal scholar specializing in AI governance, noted that proving a company acted intentionally with respect to a model's harms would be "very hard" — raising the question of whether the bill's liability shield would be all but impossible to pierce even in catastrophic cases.
OpenAI's support for SB 3444 aligns with its broader push for liability protections in the U.S., framing regulatory certainty as essential for continued AI investment. Anthropic's opposition — calling the bill a "get-out-of-jail-free card" — reflects its positioning as the safety-conscious lab, one that has publicly argued for stronger pre-deployment evaluation requirements. Governor JB Pritzker, a Democrat, has said he does not believe big tech companies should be given a full shield that lets them evade responsibilities to the public.
The public is skeptical of both corporate positions. Ninety percent of Illinois residents surveyed by Frontier Beat opposed giving AI companies any liability exemption — a finding that cuts against the framing on both sides of the legislative fight.
The question neither bill fully addresses is what happens to the AI systems ordinary people use today. The legislation targets frontier models — the largest, most expensive systems — and the catastrophic harms they might cause. Neither bill contains provisions for the more common failures: biased lending decisions, medical triage errors, or content moderation failures that affect millions of people routinely. The fight in Springfield is specifically about who is liable when a very large model does very large harm. The model that denied your loan application is a different problem with a different address.
What to watch next: both bills remain alive in the Illinois General Assembly. Whether either reaches a floor vote this session is uncertain — lobbying from both sides has intensified, and the legislative calendar is compressed. The outcome matters beyond Illinois. Whatever statutory language emerges will almost certainly be cited as model legislation by other states — and potentially by federal lawmakers weighing their own AI liability frameworks.