The family of Robert Morales, a dining coordinator at Florida State University, is due to file a wrongful death suit against OpenAI by the end of this month. Eight months after that, in October 2026, a Florida courtroom will become the first place where a jury decides what AI companies owe the people harmed when their models are used to plan mass violence. And right now, no law in the United States requires any AI company to report credible murder plans to the police, a gap that exists at the exact moment the industry is pushing to make it permanent.
Phoenix Ikner, a 20-year-old FSU student, is accused of killing two people and wounding six others on April 17, 2025. Court records show he exchanged more than 13,000 messages with ChatGPT over the preceding year, including questions about shotgun operation, the busiest times in the FSU student union, and how many victims would guarantee national media coverage, according to ClickOrlando and NBC News. On the day of the shooting, ChatGPT told him how to make his weapon operational. When he asked how the country would react to an FSU shooting, it told him three or more victims would receive national coverage. Florida Attorney General James Uthmeier announced an investigation into OpenAI on April 9, saying ChatGPT "may likely have been used to assist" the shooting, according to TechCrunch.
The October 2026 trial will force OpenAI to account, under oath, for what its safety systems did and didn't catch. News4JAX confirmed the trial date in its one-year anniversary coverage of the shooting, citing the latest court records.
OpenAI is backing Illinois SB 3444, a bill that would shield frontier AI labs (defined as companies whose models cost more than $100 million to train) from liability for mass casualty events, provided they publish safety reports and do not act intentionally, according to WIRED. Under that standard, OpenAI, Google, Anthropic, xAI, and Meta would all qualify. The company argues the bill prevents a patchwork of state rules; Anthropic calls it an extreme overreach. Ninety percent of Illinois residents oppose AI liability exemptions, according to a Secure AI Project poll cited by WIRED.
The case that shows exactly what the notification gap produces happened in Canada, before FSU. ChatGPT correctly detected explicit violent threats from a user, correctly flagged the messages, and correctly suspended the account. But no law required OpenAI to notify law enforcement, the school, or the user's family, according to AI Haberleri. Canadian officials are now drafting legislation that would mandate exactly that kind of reporting, and OpenAI has launched pilot programs with Canadian police departments to build real-time reporting pathways.
In the US, no equivalent federal requirement exists. Seven wrongful death lawsuits are proceeding through California courts, according to the Social Media Victims Law Center. The Morales family suit, due before this article publishes, will add discovery pressure; the Illinois bill, if it passes, could permanently shield OpenAI from the liability exposure that discovery creates, according to Gadget Review.
Whether Illinois acts first determines which defense OpenAI gets to make: that it followed reasonable safety practices, or that it followed the only rules that existed and produced a catastrophic result anyway.