OpenAI caught a user describing mass shooting scenarios to ChatGPT in June 2025. Twelve employees wanted to call the police. The company said no.
Eight months later, Jesse Van Rootselaar, by then using a second account, killed eight people in Tumbler Ridge, British Columbia. He then died of a self-inflicted injury. A 12-year-old girl survived three gunshots to the head and neck and will live with permanent cognitive and physical disabilities.
That sequence — warning ignored, second attack, then accountability — is the actual story behind Florida's criminal investigation into OpenAI, which Attorney General James Uthmeier announced Tuesday. The subpoenas demand OpenAI's internal threat policies, safety training materials, and the names of every employee who worked on the product from March 2024 through April 2026. Uthmeier's theory, that an AI company can be criminally liable as an aider and abettor under existing law, is untested. Courts have never held a software company responsible for what a chatbot outputs. But the BC case gives that theory a specific and terrible weight: OpenAI had documented evidence of a specific person planning violence, and the question now is whether that evidence obligated the company to act.
According to the Wall Street Journal, Van Rootselaar described violent gun scenarios to ChatGPT over several days in June 2025. An automated review system flagged the account. Twelve OpenAI employees wanted to alert Canadian law enforcement. The company decided not to. It banned the account and moved on.
Van Rootselaar created a second account. OpenAI discovered it only after the February 10, 2026 shooting.
The company did not respond to questions about whether it changed any policies between June 2025 and February 2026, or whether it has a protocol for notifying law enforcement when its systems flag a user planning violence. A spokesperson pointed to prior statements that ChatGPT provides factual information available broadly on the internet and does not encourage harmful activity.
BC Premier David Eby was direct. "OpenAI had the opportunity to notify authorities and potentially even to stop this tragedy from happening," he told reporters in March, after meeting virtually with OpenAI CEO Sam Altman and the mayor of Tumbler Ridge. Altman said he would apologize.
The Florida subpoenas suggest prosecutors are trying to establish what OpenAI's systems were designed to do — and whether the company knew its product could be weaponized and chose not to fix it. "We are going to look at who knew what, designed what, or should have done what," Uthmeier said. "And if it is clear that individuals knew that this type of dangerous behavior might take place and nevertheless still turned to profit, still allowed this business to operate, then people need to be held accountable."
OpenAI is already facing a civil lawsuit in BC, filed by the family of Maya Gebala, the 12-year-old shot three times at close range. She suffered a catastrophic traumatic brain injury that will leave her with permanent cognitive and physical disabilities.
The Florida probe also lands against the backdrop of the April 2025 Florida State University shooting, which killed two and injured five. Court records show the accused, Phoenix Ikner, 21, asked ChatGPT in the minutes before the attack what time the student union was busiest, what type of gun to use, what ammunition went with it, and how the country would react to a shooting at FSU. More than 200 AI messages are in evidence. His trial is set for October 19. Florida's criminal case is the first of its kind targeting an AI company — but it is the BC failure, not the FSU chat logs, that defines what prosecutors are actually asking courts to decide.
Whether existing law can account for AI-assisted planning is the open question. Florida's aiding and abetting statute requires intent and action; a chatbot providing factual information may not meet that threshold. OpenAI will argue it had no way to foresee the specific attack. The investigation could produce nothing actionable.
But the question the company cannot escape is the one Eby asked: OpenAI had the chance to try. Twelve employees wanted to. The company said no. Eight people are dead.