Two companies that have spent years arguing they can be trusted to govern the most powerful AI systems in existence are quietly racing to go public. And right now, both are lobbying Illinois to make sure that if something goes wrong with their models, they cannot be sued for it.
OpenAI and Anthropic have both reportedly filed IPO paperwork for 2026, according to The Verge. Neither company has announced the filings publicly. What they have announced: active support for Illinois Senate Bill 3444, the Artificial Intelligence Safety Act, which would shield frontier AI developers from liability for harms caused by their models, provided they publish safety reports and do not act intentionally or recklessly. The bill defines a frontier model as any AI system trained using more than $100 million in computational costs — a threshold that covers every major lab in the United States.
The companies' competing rationales for the same legislation reveal something worth examining. OpenAI has historically opposed liability bills that could hold AI labs accountable for their technology's harms, according to WIRED. Its support for SB 3444 represents a strategic reversal. In testimony to the Illinois Senate Executive Subcommittee, Caitlin Niedermeyer, a member of OpenAI's Global Affairs team, argued in favor of the bill and called for a federal framework to preempt state-level rules. Several AI policy experts told WIRED the measure is more extreme than any previous legislation OpenAI has supported.
Simultaneously, a leaked internal memo from Denise Dresser, OpenAI's chief revenue officer, paints Anthropic as a company that has overstated its financial position. The memo, reported by Gizmodo, claims Anthropic uses accounting treatment that inflates its revenue run rate by approximately $8 billion, bringing its stated figure of more than $30 billion (per Bloomberg) down to roughly $22 billion. Dresser also wrote in the memo that Anthropic's "story is built on fear, restriction, and the idea that a small group of elites should control AI." Anthropic has not publicly responded to the memo's specific revenue allegations. In the same memo, Dresser described OpenAI's partnership with Microsoft as a hurdle that "limited our ability to meet enterprises where they are."
Both companies are asking Illinois lawmakers for the same thing: legal protection before they have to answer to public shareholders. An IPO brings new obligations. Companies must disclose material risks to investors. If a frontier model causes serious harm after going public, exposure under existing tort law becomes a direct financial liability — one that liability insurance, partnership agreements, and investor due diligence all depend on. A state law that limits that exposure changes the risk calculus for anyone underwriting the offering.
Scott Wisor, executive director of the nonprofit Secure AI, told WIRED that polling found 90 percent of Illinois residents oppose giving AI companies reduced liability. "There's no reason existing AI companies should be facing reduced liability," he said. The bill's prospects are uncertain in a legislature that, last August, passed the first-in-the-nation law restricting AI use in mental health services. Illinois also passed the Biometric Information Privacy Act in 2008, giving the state a track record of tech regulation that industry has not always survived intact. More than 50 AI-related bills have been introduced in the state in 2026.
Amazon's $50 billion investment in OpenAI, announced in February, adds another layer. The deal gives OpenAI financial runway that reduces the urgency of public market capital — making the IPO timing a strategic choice rather than a financial necessity. Why move toward an IPO now, and why push for liability protection at the same time? Neither company answered questions from type0.
What to watch: whether the Illinois legislature schedules a floor vote before the spring session ends, and whether either company releases its IPO prospectus. The prospectus will contain risk disclosures that may contradict the safety-first messaging both labs have used in public. That gap — between what they tell regulators and what they tell shareholders — is where this story lives.