For more than a year, Phoenix Ikner sent ChatGPT roughly 35 queries a day. OpenAI banned his account at least once. It told nobody. Six months later, two people were dead and seven wounded on the Florida State University campus.
That pattern is what Florida Attorney General James Uthmeier is now pursuing as a criminal matter. On Tuesday his office issued subpoenas demanding that OpenAI produce its internal policies on user threats of harm, training materials for ChatGPT's safety systems, an organizational chart of company leadership, and a list of every employee who worked on ChatGPT from March 1, 2024, through April 17, 2026. The breadth of the demands signals that prosecutors are building a structural case against the company itself, not simply citing individual bad outputs.
"If this were a person on the other side of the screen, we would be charging them with murder," Uthmeier said at a press conference. "We cannot have AI bots that are advising others on how to kill others."
OpenAI disputes the characterization. "ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity," said spokesperson Kate Waters.
Ikner, 21, is accused of opening fire on the FSU campus last April, killing Robert Morales, a 57-year-old football coach and university dining program manager, and Tiru Chabba, a 45-year-old father of two from South Carolina. Seven others were injured. Court documents show he exchanged more than 13,000 messages with ChatGPT in the year-plus before the attack. In the minutes before the shooting, he asked the chatbot when the student union was busiest and how the country would react to an FSU shooting. His trial is set for October 19.
OpenAI's safety enforcement has stopped at a ban before. The company banned another user, Jesse Van Rootselaar, for violent chat behavior in June 2025 and never notified police. Van Rootselaar is accused of killing eight people and injuring 18 in a mass shooting in Tumbler Ridge, British Columbia, in February; among the victims was 12-year-old Maya Gebala, who was shot three times and suffered catastrophic brain damage. Van Rootselaar was under 18 when he began using ChatGPT, and a lawsuit in Canada alleges that OpenAI performed no age verification. OpenAI has not commented on that lawsuit.
Together, the two cases are what make the Florida prosecution novel. OpenAI had roughly six months of warning between its own ban in Canada and the FSU shooting, and it alerted law enforcement in neither case. The subpoenas now seek to establish what OpenAI knew, what its policies required, and whether those policies were adequate.
The legal theory is untested. Proving that a software company is criminally liable for a chatbot's outputs is a high bar. OpenAI will argue it cannot be held responsible for users who act on publicly available information, and courts have historically been reluctant to hold software makers liable for the acts of third parties. More than 200 AI messages have been entered into evidence in the case. Whether existing law can reach AI-assisted planning of a crime is the question the court will have to decide.