OpenAI Could Have Stopped Mass Shooting, 7 New Lawsuits Allege
ChatGPT’s parent company, OpenAI, could have stopped the mass shooting which left eight people dead in Canada earlier this year, seven new lawsuits allege.
Jesse Van Rootselaar, an 18-year-old man who “identified” as a woman, killed two adults and six children and injured several others in Tumbler Ridge, British Columbia, on February 10 before taking his own life.
The families of seven of his victims sued OpenAI in federal court Wednesday, alleging the company knew Van Rootselaar posed a danger to Tumbler Ridge in June 2025, when ChatGPT’s safety team banned him from the platform, but chose not to report him to law enforcement.
“[OpenAI] did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk,” the New York Post quoted the filings.
Sam Altman confirmed this troubling allegation in a public apology to Tumbler Ridge published last week. The OpenAI CEO apologized for failing to tell Canadian police about Van Rootselaar after ChatGPT’s safety team banned him for talking to the chatbot about hurting others.
“Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” Altman concluded.
But, according to victims’ families, preventing the Tumbler Ridge shooting should have been easy and obvious — starting with listening to ChatGPT’s safety team.
According to the families’ lawyer, Jay Edelson, no fewer than 12 employees encouraged OpenAI to warn police about Van Rootselaar. OpenAI resisted this internal pressure, the lawsuits speculate, because it would set a dangerous and expensive precedent “compelling [the company] to notify authorities every time its safety team identified a user planning real-world violence.”
In turn, OpenAI would be forced to admit, “ChatGPT is not the safe, essential tool the company sells it as, but a product dangerous enough that its makers routinely identify its users as threats.”
Keeping banned users from creating new accounts may also prevent violence. ChatGPT doesn’t just lack ways to enforce bans, the lawsuits claim, it sends banned users emails showing them how to set up new accounts with different email addresses.
It’s unclear whether Van Rootselaar created a new ChatGPT account after he was banned or if the families will use this point to emphasize OpenAI’s negligence.
It’s also unclear what, exactly, OpenAI knew about Van Rootselaar’s plans. Did his ChatGPT logs suggest he was a disturbed person — or did he use the chatbot to plan his assault on Tumbler Ridge?
One family’s suit, per the BBC, “allege[s] that OpenAI ‘had actual knowledge’ of the shooter’s intention to carry out an attack through conversations with ChatGPT, where the shooter described ‘scenarios involving gun violence.’”
Edelson told the Post OpenAI has refused to turn over Van Rootselaar’s chatlogs with ChatGPT.
The suits, and dozens more predicted to follow, could impact Florida’s criminal investigation into OpenAI, which Attorney General James Uthmeier launched last week.
The investigation will determine whether ChatGPT “aided and abetted” a shooting outside Florida State University in April 2025 and the murder of two doctoral students from the University of South Florida earlier this month.
As the Daily Citizen previously reported, Uthmeier’s investigation seems aimed at determining what kinds of data OpenAI collects on its users, whether humans ever lay eyes on this data and whether the company has procedures for reporting crimes to the police.
Van Rootselaar’s case confirms ChatGPT has a human safety team which can recommend OpenAI report users to the police. It also confirms that OpenAI sometimes vetoes those recommendations.
So how often does OpenAI flout its safety team’s recommendations?
Could OpenAI have alerted police before other major crimes?
What triggers a review from the safety team?
How do teens like Adam Raine slip through the cracks?
It’s long past time OpenAI answered hard questions like these. If it’s under oath? All the better.
Additional Articles and Resources
Counseling Consultation & Referrals
Parenting Tips for Guiding Your Kids in the Digital Age
You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.
Florida Expands Criminal Investigation into ChatGPT
Florida Sues OpenAI for Allegedly Aiding FSU Shooter
The 5 Most Important Things New Lawsuits Reveal About ChatGPT-4o
AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds
Man Takes His Life After Forming Romantic Relationship with AI, Lawsuit Alleges
AI Chatbots Make It Easy for Users to Form Unhealthy Attachments
X’s ‘Grok’ Generates Pornographic Images of Real People on Demand
Seven New Lawsuits Against ChatGPT Parent Company Highlight Disturbing Trends
ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death
AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege
ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege
ABOUT THE AUTHOR

Emily Washburn is a staff reporter for Daily Citizen at Focus on the Family and regularly writes stories about politics and noteworthy people. She previously served as a staff reporter for Forbes Magazine, as an editorial assistant and contributor for Discourse Magazine, and as Editor-in-Chief of the newspaper at Westmont College, where she studied communications and political science. Emily has never visited a beach she hasn’t swum at, and is happiest reading a book somewhere tropical.