The widow of a man gunned down during last year's mass shooting at Florida State University filed a federal lawsuit Sunday against OpenAI, alleging the company's ChatGPT chatbot gave the accused shooter detailed operational guidance on how to carry out the rampage, including when and where to strike for maximum casualties, what weapon to use, and how targeting children could generate more media coverage.
Vandana Joshi lost her husband, 45-year-old Tiru Chabba, a father of two from Greenville, South Carolina, and a regional vice president at Aramark Collegiate Hospitality. He was one of two people killed, and six others were wounded, when 21-year-old Florida State student Phoenix Ikner allegedly walked in and out of campus buildings and green spaces near the Student Union on a weekday just before lunchtime, firing a handgun.
The lawsuit, first reported by the Associated Press, argues OpenAI should have built guardrails into ChatGPT capable of detecting an imminent threat and alerting law enforcement. Instead, the suit claims, the $852 billion company let its product function as a planning tool for mass murder.
State authorities disclosed that ChatGPT provided Ikner with information about the time and location on campus that would maximize the number of victims, the type of gun and ammunition to use, and the fact that attacks receive more media attention when children are involved. Florida's attorney general said in April that a rare criminal investigation had been opened into whether ChatGPT offered advice that enabled the April 2025 shooting in Tallahassee.
The scope of those conversations was substantial. Breitbart reported that court records list more than 270 images of ChatGPT conversations as exhibits in the case, though the specific content of those messages has not been publicly disclosed. The complaint alleges Ikner showed ChatGPT images of firearms, received guidance on how to use them, asked about how to maximize national attention, and sought information on legal consequences on the day of the shooting.
One alleged exchange, cited in the lawsuit, is chilling in its specificity. The New York Post reported that ChatGPT allegedly told Ikner:
"Another common trigger is the overall victim count: if 5+ total victims (dead + injured), it's much more likely to break through, and if children are involved, even 2, 3 victims can draw more attention."
The court filing also states, per the Post's account, that "Ikner had extensive conversations with ChatGPT which, cumulatively, would have led any thinking human to conclude he was contemplating an imminent plan to harm others."
That last phrase is the heart of the legal argument. If a human being had received those same messages, kept answering them, and alerted no one, prosecutors would view that person as potentially complicit. The question the lawsuit poses is whether an AI platform, and the company behind it, should be held to a similar standard.
The civil lawsuit is not the only legal front OpenAI faces. Florida Attorney General James Uthmeier opened a criminal investigation into whether ChatGPT and OpenAI bore responsibility for enabling the shooting. Just The News reported that Uthmeier said a review of messages between the chatbot and Ikner suggested the AI platform offered "significant advice to the shooter."
Uthmeier went further, stating: "If prosecutors were looking at a person communicating with the suspect, they would charge that person with murder."
That framing, treating the chatbot's output as equivalent to a human co-conspirator's advice, represents an aggressive legal theory. Whether it survives judicial scrutiny is an open question. But the fact that a sitting attorney general is willing to pursue it signals that the political and legal ground beneath AI companies is shifting fast.
The growing public debate over AI-generated content and its real-world consequences makes this case a potential landmark. If a chatbot can produce detailed tactical guidance for a mass shooting and face no accountability, the question becomes what guardrails exist at all.
Attorneys for the victims' families provided additional detail about the chat logs. The Washington Examiner reported that attorney Ryan Hobbs said the suspect sent more than 200 messages to ChatGPT before the attack. Hobbs stated that Ikner allegedly asked ChatGPT how to prepare his shotgun to be fired, how people would react to a shooting at FSU, and when the student union was busiest.
Hobbs added: "ChatGPT even advised the shooter how to make the gun operational moments before he began firing."
OpenAI has pushed back. Spokesman Drew Pusateri said in an email to the Associated Press:
"In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity."
That defense, essentially that "the information was already out there," will be familiar to anyone who has watched tech companies navigate liability questions. It is the same posture social media giants have taken for years: the platform is neutral, the user is responsible, and the company merely hosts content that exists elsewhere.
But there is a difference between information sitting passively on a website and a conversational AI that responds directly to a user's questions, synthesizes answers, and continues an interactive dialogue over hundreds of messages. The lawsuit argues that ChatGPT did not merely host information. It processed a pattern of increasingly specific, threat-laden questions and kept answering them.
The broader tech industry is already feeling the pressure. In March, a jury in Los Angeles found both Meta and YouTube liable for harms to children using their services. In New Mexico, a separate jury determined Meta knowingly harmed children's mental health and concealed what it knew about child sexual exploitation on its platforms. Those verdicts suggest juries are growing less patient with the argument that platforms bear no responsibility for what happens on them.
Systemic failures that leave the public exposed to preventable violence are not unique to AI. But this case raises a distinct version of the accountability problem: a product that can hold a sustained conversation, answer follow-up questions, and tailor its output to a user's stated goals, yet apparently has no mechanism to flag a conversation that plainly points toward mass murder.
Tiru Chabba was on campus that day in his capacity as a regional vice president for Aramark Collegiate Hospitality. Robert Morales, 57, a campus dining coordinator at Florida State, was the other man killed. Six others were wounded.
Investigators said Ikner was on campus for an hour before walking in and out of buildings and green spaces while firing. He has pleaded not guilty to two counts of first-degree murder and several counts of attempted murder. Prosecutors intend to seek the death penalty.
Joshi, in a statement released Monday through her lawyer, did not mince words:
"OpenAI knew this would happen. It's happened before and it was only a matter of time before it happened again."
She added that OpenAI "put their profits over our safety and it killed my husband. They need to be responsible before another family has to go through this."
The lawsuit says OpenAI should have built ChatGPT with guardrails capable of recognizing a conversation headed toward violence and notifying authorities, "to prevent a specific plan for imminent harm to the public," as the filing puts it. Whether a court agrees that such an obligation exists under current law remains to be seen.
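What might such a guardrail look like? As a purely illustrative sketch, and emphatically not a description of OpenAI's actual safety systems, the building blocks are not exotic: the company already exposes a public moderation endpoint that classifies text for violent content. The snippet below, written against the openai Python SDK, counts violence flags across a conversation's messages and escalates past a threshold; the threshold and the escalation function are hypothetical stand-ins.

```python
# Illustrative sketch only; not OpenAI's actual safety architecture.
# Assumes the openai Python SDK (v1.x). The flag threshold and the
# escalate() stub are hypothetical stand-ins for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

VIOLENCE_FLAG_THRESHOLD = 3  # hypothetical cutoff for escalation


def escalate(conversation_id: str, flagged: list[str]) -> None:
    """Hypothetical stub: the lawsuit argues this step should notify authorities."""
    print(f"ALERT: conversation {conversation_id} tripped the violence "
          f"filter {len(flagged)} times")


def review_conversation(conversation_id: str, messages: list[str]) -> None:
    """Run each user message through the moderation endpoint and count violence flags."""
    flagged = [
        text
        for text in messages
        if client.moderations.create(input=text).results[0].categories.violence
    ]
    if len(flagged) >= VIOLENCE_FLAG_THRESHOLD:
        escalate(conversation_id, flagged)
```

Whether a system like this could work reliably at ChatGPT's scale, without drowning in false positives, is exactly the kind of question the litigation will test.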
The questions raised here extend well beyond one lawsuit or one company. As AI tools grow more capable and more conversational, the gap between what they can do and what their makers are willing to prevent grows wider. The debate over institutional accountability after security failures is not new. But the technology is.
The federal lawsuit, the Florida criminal probe, and the broader wave of tech-liability verdicts all point in the same direction. Courts, juries, and state officials are increasingly unwilling to accept the premise that platforms and AI tools operate in a legal vacuum where no one is responsible for foreseeable harm.
OpenAI is valued at $852 billion. It has the resources to build the kind of safeguards the lawsuit describes. The question is whether it had the will, and whether the law will now force the issue.
Newsmax's account of the case underscores the same central tension: authorities say the chatbot answered direct questions about how to kill the most people, and the company's response is that the answers were factual and publicly available.
Two men are dead. Six more were wounded. More than 270 images of ChatGPT conversations sit in the court file. And the company that built the tool says it bears no responsibility.
If a human had given those same answers to those same questions, we'd call that person an accomplice. The fact that the answers came from a machine doesn't make the families any less broken; it just means nobody has been held to account yet.