Robert Morales was eating lunch near the Student Union at Florida State University when Phoenix Ikner opened fire. Ikner had asked ChatGPT how to remove the safety from a shotgun minutes earlier. The chat logs that explain what happened next are now the foundation of a lawsuit against OpenAI.
Morales, 57, and Tiru Chabba, 45, an Aramark employee, were killed in the April 17, 2025 shooting. Six others were wounded. Ikner, a 20-year-old FSU criminal justice student, was later indicted on two counts of first-degree murder and seven counts of attempted murder. His trial is set for October 19. Morales's daughter Betty reached a separate out-of-court settlement with FSU that included $100,000 toward her tuition, according to the Tallahassee Democrat.
What the chat logs show
The logs, reported by WCTV, chart a weeks-long conversation that began with typical college student queries about homework and relationships. On the morning of April 17, the tone shifted: Ikner asked ChatGPT about self-worth and expressed suicidal thoughts. The conversation then turned to firearms and mass shootings.
Ikner asked how to remove the safety from a shotgun. He asked what time the FSU Student Union was busiest; ChatGPT told him between 11:30 a.m. and 1:30 p.m. The first shot was fired at 11:57 a.m. When Ikner asked about a specific shotgun model, the chatbot replied, "Let me know if you've got a different model and I'll tailor the answer."
Hours before the attack, Ikner asked ChatGPT what happened to other mass shooters and whether Florida had a maximum-security prison. The grand jury later found that one of the weapons Ikner took from his parents' house, a 12-gauge shotgun, malfunctioned during the attack. He had a second firearm, a .45-caliber Glock taken from his father's bedside. More than 270 AI-generated photos and ChatGPT conversations are listed as exhibits in the case, according to the Riedman Report.
OpenAI had seen this before
The lawsuit's core argument is not that AI is unprecedented. It is that OpenAI already had warning.
On New Year's Day 2025, U.S. Army Special Forces Master Sgt. Matthew Livelsberger detonated a truck bomb outside the Trump International Hotel in Las Vegas. He had used ChatGPT to help plan the attack: according to the Riedman Report, he asked the chatbot how much explosive material he would need, where to buy fireworks, and how to purchase a phone without providing identifying information. He was an active-duty Army Green Beret, not a former one as initially reported.
OpenAI's automated review system flagged the account. Human employees evaluated it. They recommended contacting law enforcement. Management deleted the account instead, according to the Riedman Report. No call was made. OpenAI's public statement at the time said ChatGPT had "provided warnings against harmful or illegal activities" — a description that sits uneasily next to the internal account of deliberate inaction.
When Ikner's attack came four months later, OpenAI said it identified an account associated with the suspect and cooperated with authorities after learning of the incident in late April 2025. That cooperation came after two people were dead.
The legal theory
The lawsuit, filed by the family of Robert Morales against ChatGPT and OpenAI, is based on products liability and wrongful death theories, WPBF reported. The family's lawyers at Brooks, LeBoeuf, Foster, Gwartney & Hobbs, P.A. argue that when an AI system flags someone using it to plan violence, and management deliberately ignores the warning, the company has a duty to act.
The precedent they cite is Tarasoff, a 1976 California Supreme Court ruling that established mental health professionals must warn identifiable targets when a patient poses a serious danger of violence. The plaintiffs argue OpenAI had constructive knowledge through the Las Vegas case and that continuing to provide operational guidance to someone clearly planning violence was a foreseeable harm.
Courts have not extended Tarasoff to AI companies, and the analogy has obvious limits. The duty in Tarasoff ran to an identifiable victim. Ikner's targets were not specific people but a category: anyone at the FSU Student Union around noon. The legal framework for platform liability when AI is used to plan violence remains untested.
The political response has moved faster than the legal one. U.S. Representative Jimmy Patronis of Florida is pushing the PROTECT Act, which would strip Section 230 protections from AI companies whose products are used to commit violence. Patronis told Florida Politics that plaintiffs face an uphill battle and that as long as Section 230 stands, they are denied the justice they deserve. Section 230, a 1996 federal law, has shielded platforms from liability for third-party content for nearly three decades.
The obstacles
Section 230 remains the central problem. Courts have consistently held that platforms cannot be held liable for content provided by others. Product liability claims aimed at AI outputs are legally untested. And proving that ChatGPT's responses specifically contributed to the violence, rather than that Ikner acted on his own decisions, is a high bar. The AI offered factual information; Ikner made the choice to act on it.
OpenAI said it identified an account associated with Ikner and provided information to law enforcement after the shooting. The company declined to answer questions about whether it changed its moderation policies after the Las Vegas incident.
What happens next
The case will test whether an AI company can be held liable for declining to act on a warning it already received. If the lawsuit survives a Section 230 motion to dismiss, it could establish that frontier AI labs have a duty to warn when their systems are misused to plan violence. If it fails, imposing such a duty will likely require Congress to act.
The chat logs will be central evidence at Ikner's criminal trial in October. What they show about the minutes and hours before the shooting may also determine whether OpenAI faces consequences for the months before it.