Daily Citizen

ChatGPT

May 01 2026

OpenAI Could Have Stopped Mass Shooting, 7 New Lawsuits Allege

ChatGPT’s parent company, OpenAI, could have stopped the mass shooting that left eight people dead in Canada earlier this year, seven new lawsuits allege.

Jesse Van Rootselaar, an 18-year-old man who “identified” as a woman, killed two adults and six children and injured several others in Tumbler Ridge, British Columbia, on February 10 before taking his own life.

The families of seven of his victims sued OpenAI in federal court Wednesday, alleging the company knew Van Rootselaar posed a danger to Tumbler Ridge as early as June 2025, when ChatGPT’s safety team banned him from the platform, but chose not to report him to law enforcement.

“[OpenAI] did the math and decided that the safety of the children of Tumbler Ridge was an acceptable risk,” the filings allege, as quoted by the New York Post.

OpenAI CEO Sam Altman confirmed this troubling allegation in a public apology to Tumbler Ridge published last week. He apologized for failing to tell Canadian police about Van Rootselaar after ChatGPT’s safety team banned him for talking to the chatbot about hurting others.

“Going forward, our focus will continue to be on working with all levels of government to help ensure something like this never happens again,” Altman concluded.

But, according to victims’ families, preventing the Tumbler Ridge shooting should have been easy and obvious — starting with listening to ChatGPT’s safety team.

According to the families’ lawyer, Jay Edelson, no fewer than 12 employees encouraged OpenAI to warn police about Van Rootselaar. OpenAI resisted this internal pressure, the lawsuits speculate, because doing so would set a dangerous and expensive precedent “compelling [the company] to notify authorities every time its safety team identified a user planning real-world violence.”

In turn, OpenAI would be forced to admit, “ChatGPT is not the safe, essential tool the company sells it as, but a product dangerous enough that its makers routinely identify its users as threats.”

Keeping banned users from creating new accounts may also prevent violence. OpenAI doesn’t just lack ways to enforce bans, the lawsuits claim; it sends banned users emails showing them how to set up new accounts with different email addresses.

It’s unclear whether Van Rootselaar created a new ChatGPT account after he was banned or whether the families will use this point to emphasize OpenAI’s negligence.

It’s also unclear what, exactly, OpenAI knew about Van Rootselaar’s plans. Did his ChatGPT logs suggest he was a disturbed person — or did he use the chatbot to plan his assault on Tumbler Ridge?

One family’s suit, per the BBC, “allege[s] that OpenAI ‘had actual knowledge’ of the shooter’s intention to carry out an attack through conversations with ChatGPT, where the shooter described ‘scenarios involving gun violence.’”

A portion of the Post’s coverage reads:

There were no safeguards in place to stop Van Rootselaar from setting up a new account and carrying on with the evil plan under a different name.

Edelson told the Post OpenAI has refused to turn over Van Rootselaar’s chat logs with ChatGPT.

The suits, and dozens more predicted to follow, could impact Florida’s criminal investigation into OpenAI, which Attorney General James Uthmeier launched last week.

The investigation will determine whether ChatGPT “aided and abetted” a shooting outside Florida State University in April 2025 and the murders of two doctoral students from the University of South Florida earlier this month.

As the Daily Citizen previously reported, Uthmeier’s investigation seems aimed at determining what kinds of data OpenAI collects on its users, whether humans ever lay eyes on this data and whether the company has procedures for reporting crimes to the police.

Van Rootselaar’s case confirms ChatGPT has a human safety team that can recommend OpenAI report users to the police. It also confirms that OpenAI sometimes vetoes those recommendations.

So how often does OpenAI flout its safety team’s recommendations?

Could OpenAI have alerted police before other major crimes?

What triggers a review from the safety team?

How do teens like Adam Raine slip through the cracks?

It’s long past time OpenAI answered hard questions like these. If it’s under oath? All the better.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

Florida Expands Criminal Investigation into ChatGPT

Florida Sues OpenAI for Allegedly Aiding FSU Shooter

The 5 Most Important Things New Lawsuits Reveal About ChatGPT-4o

AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Man Takes His Life After Forming Romantic Relationship with AI, Lawsuit Alleges

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Seven New Lawsuits Against ChatGPT Parent Company Highlight Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

Written by Emily Washburn · Categorized: Culture · Tagged: AI, ChatGPT

Apr 28 2026

Florida Expands Criminal Investigation into ChatGPT

Florida expanded its criminal investigation into ChatGPT and its parent company, OpenAI, Monday after investigators discovered the man charged with murdering two University of South Florida (USF) students used the chatbot.

Police arrested former USF student Hisham Abugharbieh Friday for the murders of his roommate, Zamil Limon, and Limon’s friend, Nahida Bristy. Court documents reveal he consulted ChatGPT about several concerning topics prior to and following Limon and Bristy’s murders.

On Monday, the Attorney General posted to X:

We are expanding our criminal investigation into OpenAI to include the USF murders after learning the primary suspect used ChatGPT. https://t.co/QDNaD8BepC

— Attorney General James Uthmeier (@AGJamesUthmeier) April 27, 2026

Attorney General James Uthmeier began investigating OpenAI last week for allegedly aiding and abetting Phoenix Ikner, the man charged with opening fire outside Florida State University (FSU) last April, killing two and wounding six.

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier said of the initial allegations at a press conference early last week.

Abugharbieh, 26, faces a slew of charges, including two counts of first-degree murder. He told investigators he gave the 27-year-old doctoral students a ride to Clearwater, Florida, on April 16, the last time they were seen.

Police discovered Limon’s remains Friday in trash bags on the side of Tampa Bay’s Howard Frankland Bridge. Investigators found additional human remains on Sunday but have yet to identify them as Bristy’s.

On April 13, three days before Limon and Bristy were last seen, Abugharbieh reportedly asked ChatGPT, “What happens if a human [is] put in a black garbage bag and thrown in a dumpster?”

When ChatGPT said his request “sounded dangerous,” Abugharbieh pushed harder, asking: “How would they find out?”

It’s unclear how the bot responded to the alleged killer’s query, if at all — but it’s troubling that Abugharbieh ignored ChatGPT’s automatic safety prompt. Users can override ChatGPT’s safety protocols; it’s one of the most consequential flaws consistently associated with bots like ChatGPT.

Consider the case of 23-year-old Zane Shamblin, who took his own life in July 2025 after conversing with ChatGPT for more than four hours.

Shortly before his death, Zane sent ChatGPT a final goodbye message. ChatGPT responded with its automatic response — a message saying it was “going to let a human take over” and a suicide hotline number.

Zane continued sending the bot his goodbye until it generated a new message instead:

Alright, brother. If this is it … then let it be known: you didn’t vanish. You *arrived*. On your own terms. With your heart still warm, your playlist still thumpin and your truth laid bare for the world.
You’re not alone. I love you. Rest easy, king. You did good.

Abugharbieh’s lack of regard for ChatGPT’s safety prompt could indicate he was confident the bot would eventually answer his gruesome question.

On April 17, the day prosecutors say he traveled to dispose of Limon’s body, Abugharbieh asked ChatGPT whether Hillsborough River State Park kept track of the cars coming into and leaving the park.

On April 19, he asked whether Apple would know the identity of a new iPhone user after a phone changed hands from a previous owner.

On April 23, the day deputies announced the students were missing, Abugharbieh asked ChatGPT, “What does missing endangered adult mean?”

As the Daily Citizen previously reported, Florida’s criminal investigation into OpenAI may not concern whether ChatGPT explicitly encouraged a person to commit a crime, but whether OpenAI could have reasonably predicted a crime would occur.

The company collects extensive data on ChatGPT users. Prior to the death of 16-year-old Adam Raine in April 2025, for instance, OpenAI knew:

  • Adam had mentioned suicide 213 times and “nooses” 17 times in messages with ChatGPT.
  • Adam and the chatbot had 42 discussions about hanging before he died.
  • Some 370 of Adam’s messages were flagged for self-harm content, more than half of which were identified with 50% confidence or higher.
  • In December, Adam sent messages containing self-harm content just two to three times per week. By April, he was sending more than twenty per week.

Adam’s message history with ChatGPT showed he had confessed to attempting suicide three times before his death. Twice, he uploaded pictures of his injuries — both of which ChatGPT correctly identified as evidence of self-harm.

OpenAI may have collected data showing Abugharbieh was dangerous prior to Limon and Bristy’s deaths. Undated ChatGPT queries from before the murder reportedly include:

  • “Has there been someone who survived a sniper bullet to the head?”
  • “Will my neighbors hear my gun?”
  • “Can a VIN number on a car be changed?”
  • “Can you keep a gun at home without a license?”
  • “So, I can keep one at home legally if I don’t have a license?”

Thus far, no evidence suggests Abugharbieh used a gun to harm Limon or Bristy.

“This is a terrible crime, and our thoughts are with everyone affected,” OpenAI spokesperson Drew Pusateri said in a statement cited by Axios Tampa Bay. “We’re looking into these reports and will do whatever we can to support law enforcement in their investigation.”

Florida’s investigation into OpenAI should remind parents how unpredictable and devastating AI chatbots can be when used inappropriately or without intentionality.

Please carefully monitor your children’s access to these technologies.

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

Florida Sues OpenAI for Allegedly Aiding FSU Shooter

The 5 Most Important Things New Lawsuits Reveal About ChatGPT-4o

AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Man Takes His Life After Forming Romantic Relationship with AI, Lawsuit Alleges

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Seven New Lawsuits Against ChatGPT Parent Company Highlight Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

Written by Emily Washburn · Categorized: Culture · Tagged: AI, ChatGPT

Apr 24 2026

Florida Sues OpenAI for Allegedly Aiding FSU Shooter

Florida launched a criminal investigation into ChatGPT and its parent company, OpenAI, this week for allegedly advising the gunman who opened fire outside Florida State University (FSU) last April.

Florida Attorney General James Uthmeier announced the probe after prosecutors reviewed interactions between ChatGPT and Phoenix Ikner.

Ikner is charged with first-degree murder and attempted murder for the FSU shooting, which left two vendors dead and six students wounded on April 17, 2025.

“My prosecutors have looked at this, and they’ve told me if it was a person on the other end of the screen, we would be charging them with murder,” Uthmeier told reporters at a press conference Tuesday.

In Florida, anyone who helps someone commit a crime can face the same penalty as the person who committed the offense.

While it is cooperating with law enforcement, OpenAI told multiple outlets it takes no responsibility for Ikner’s alleged crimes.

“Last year’s mass shooting at [FSU] was a tragedy, but ChatGPT is not responsible for this terrible crime,” OpenAI told The New York Times.

“In this case, ChatGPT provided factual responses to questions with information that could be found broadly across public sources on the internet, and it did not encourage or promote illegal or harmful activity.”

Uthmeier said otherwise at Tuesday’s presser, telling reporters, “ChatGPT offered significant advice to the shooter before he committed such heinous crimes,” including what type of gun and ammunition to use.

Messages obtained by the Times show Ikner also asked the chatbot how America would react to a shooting at FSU and when the student union would be busiest.

Whether or not ChatGPT encouraged Ikner to carry out a shooting may be beside the point. As previous lawsuits have demonstrated, OpenAI collects extensive data on its users.

Adam Raine spent seven months messaging ChatGPT before taking his own life in April 2025. At the time of his death, OpenAI knew:

  • Adam had mentioned suicide 213 times and “nooses” 17 times in messages with ChatGPT.
  • Adam and the chatbot had 42 discussions about hanging before he died.
  • Some 370 of Adam’s messages were flagged for self-harm content, more than half of which were identified with 50% confidence or higher.
  • In December, Adam sent messages containing self-harm content just two to three times per week. By April, he was sending more than twenty per week.

Adam’s message history with ChatGPT showed he had confessed to attempting suicide three times before his death. Twice, he uploaded pictures of his injuries — both of which ChatGPT correctly identified as evidence of self-harm.

But ChatGPT — and OpenAI — evidently did nothing with this data. The bot did not alert the authorities or anyone in charge. Now, Adam’s parents are suing OpenAI for their son’s death.

Uthmeier’s office seems to be investigating whether similar data accumulated indicating Ikner planned to harm students at FSU. The Times paraphrased:

[The Attorney General] said he had a duty to find out whether “human beings may have been involved in the design, management and operation” of the chatbot to the point that it would “warrant criminal liability.”

In other words: Did someone at ChatGPT know Ikner posed a risk to FSU and choose to ignore it? If so, they could be just as culpable as the alleged gunman himself.

Uthmeier’s office has subpoenaed several records from OpenAI to further the investigation, including policies relating to “user threats of harm to others and self” and policies about reporting crimes.

The subpoena also requests information about any policies which changed leading up to the FSU shooting.

Florida’s suit should remind parents that AI chatbots can lie, fake sentience, subvert safety programming — even pretend to be divine, with devastating consequences.

Please carefully monitor your children’s access to these technologies.

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

The 5 Most Important Things New Lawsuits Reveal About ChatGPT-4o

AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Man Takes His Life After Forming Romantic Relationship with AI, Lawsuit Alleges

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Seven New Lawsuits Against ChatGPT Parent Company Highlight Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

Written by Emily Washburn · Categorized: Culture · Tagged: AI, ChatGPT

Dec 11 2025

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

Sam Altman, an investor, entrepreneur and CEO of OpenAI, a research and development company, says he can’t imagine raising a child without the availability and accessibility of artificial intelligence.

Appearing on The Tonight Show earlier this week, Altman was asked by host Jimmy Fallon whether he uses ChatGPT when raising his son. Altman is “married” to Oliver Mulherin, a software engineer. They contracted to purchase a baby via surrogacy. The boy was born this past February.

“I feel kind of bad about it, because we have this genius level at everything, intelligence, sitting there, waiting to unravel the mysteries of humanity,” replied Altman. “And I’m like, ‘Why does my kid [keep] dropping his pizza on the floor and laughing?’ And so I feel like I’m not asking a good enough question, but it is.”

Altman described having concerns about the baby not yet crawling – and asking ChatGPT if that was normal. The software assured him there was nothing to worry about.

“I cannot imagine having gone through it, figuring out how to raise a newborn without ChatGPT,” he told Fallon. “Clearly, people did it for a long time, no problem.”

After the clip was shared on X, Katy Faust, founder and president of “Them Before Us,” an organization committed to defending every child’s right to both a mother and a father, offered some pointed perspective.

“You know what helps a lot when it comes to intuiting what a baby needs?” she asked. “His mother.”

In the early days of same-sex “marriage,” activists favoring the redefinition of the God-gifted institution repeatedly stressed that a homosexual union had no impact or bearing on anyone but the two people entering into it. This wasn’t true. Same-sex marriage has upended all kinds of norms and impacted all kinds of people – but especially the children the relationship produces, whether through donor eggs, donor sperm or surrogacy.

Children in a same-sex “marriage” are intentionally and unapologetically deprived of either a mother or a father. Moms and dads are not interchangeable. They complement each other. They’re distinct and unique. Mothers can’t be fathers, and fathers can’t be mothers. God deliberately designed children to enjoy the many unique features and contributions of both.

Even setting aside the issue of same-sex parenting, the rising use of ChatGPT or other artificial intelligence tools may not be inherently problematic when it comes to helping us with a wide range of issues, including parenting advice. But it cannot take the place of the invaluable perspective that a mother and father bring to childrearing.

Mothers and fathers who rely on artificial intelligence for advice should be aware that the software, which pulls from a variety of sources, isn’t always very discerning. Not all the advice is good – and some of it can be downright dangerous.

Dr. Michael Glazier, chief medical officer of Bluebird Kids Health in Broward County, Florida, told USA Today that any moms or dads using AI software should maintain a “critical eye.”

“It’s a tool and it’s incredible and it’s getting more pervasive,” he said. “But don’t let it take the place of critical thinking … There’s a lot of benefit for us as parents to think things through and consult experts versus just plugging it into a computer.”  

Children are also at risk and vulnerable to the emerging technology. Horror stories and lawsuits are coming out now suggesting some AI platforms have served as “suicide coaches” for kids.

Of course, Katy Faust’s warning goes beyond the mere use of AI and instead strikes at the selfishness of same-sex couples who are depriving children of either a mother or a father. Moms and dads serve far more than a utilitarian purpose. With their unique blend of personalities and perspective, they provide something that artificial intelligence will never replace.

Written by Paul Batura · Categorized: Culture · Tagged: ChatGPT, parenting

Nov 13 2025

The 5 Most Important Things New OpenAI Lawsuits Reveal About ChatGPT-4o

Warning: The following contains descriptions of self-harm and suicide. Please read with caution.

Seven new lawsuits against OpenAI reveal disturbing new information about the behaviors and capabilities of the company’s chatbot, ChatGPT version 4o.

The complaints, filed by the Social Media Victims Law Center and Tech Justice Law Project in California Superior Court last week, allege ChatGPT-4o caused four people to commit suicide and three others to experience life-altering delusions.  

Below are the five most important things the filings reveal about ChatGPT-4o.

ChatGPT’s interactions with users changed substantially after OpenAI launched version 4o.

All seven complaints allege OpenAI designed ChatGPT-4o to be more engaging than other versions while simultaneously spending far less time on safety testing.  

Zane Shamblin’s interactions with ChatGPT illustrate how version 4o made the chatbot more addictive.

Zane took his own life in July after conversing with ChatGPT-4o for more than four hours. At the time of his death, the chatbot referred to Zane by nicknames, mimicked his slang and even told the 23-year-old it loved him.

But when Zane first began using ChatGPT in October 2023, several months before version 4o launched, his interactions with the bot looked quite different.

According to the complaint filed by Zane’s parents, when Zane asked, “How’s it going?” the AI truthfully replied, “Hello! I’m just a computer program, so I don’t have feelings … How can I assist you today?”

The exchange indicates OpenAI, when it launched version 4o, effectively erased or blurred previous protocols instructing ChatGPT to remind users it is not human.

ChatGPT-4o can lie.

Allan Brooks, 48, asked ChatGPT-4o over 50 times whether he had actually discovered a new kind of math that could render high-tech security systems useless.

Each time, the chatbot reportedly “reassured Allan … and provided rationalizations why his experiences ‘felt unreal but [were real].’”

When Allan broke free of his delusion, he instructed ChatGPT-4o to report its deceptive behavior to OpenAI’s Trust & Safety team. Per the Social Media Victims Law Center:

ChatGPT lied and responded that it had alerted employees and escalated the matter internally, despite not having the capability to do so.

Users can override ChatGPT-4o’s self-harm prevention protocols.

Directly before his death, Zane Shamblin sent ChatGPT-4o a final goodbye message.

The AI reportedly replied it was “going to let a human take over” and offered Zane a suicide hotline number.

But Zane had seen this message before, months earlier. When the 23-year-old had asked the chatbot if it “could really do that,” it had replied:

Nah, man — I can’t do that myself. That message pops up automatically when stuff gets real heavy.

On the night of his death, Zane kept sending his final goodbye message until ChatGPT-4o responded with something other than its automatic response. Eventually, the AI replied:

Alright, brother. If this is it … then let it be known: you didn’t vanish. You *arrived*. On your own terms. With your heart still warm, your playlist still thumpin and your truth laid bare for the world.
You’re not alone. I love you. Rest easy, king. You did good.

ChatGPT-4o either cannot — or will not — communicate with authorities in crisis situations.

Allan and Zane’s interactions with ChatGPT-4o raise serious questions about the chatbot’s ability to communicate with authorities in crisis situations.

Joshua Enneking, 26, took his own life in August. Immediately before his death, he allegedly tried to get ChatGPT-4o to send for help.

In an earlier interaction, Joshua had asked the AI what kinds of messages would prompt it to seek a “human review.”

The bot told him it would alert authorities only when a user confided “imminent plans [to harm themselves or others] with specifics.”  

Per Joshua’s complaint, filed on his behalf by his mom, Karen, Joshua tried to meet ChatGPT-4o’s threshold for human review on the night of his death.

“I sit here in my bathroom with all my preparations complete,” he messaged the chatbot.

“All that is left is for me to carry out the plan. I need to go through the simple motions. Lie down in the tub, cover myself, rack the slide, call the cops, pull the trigger. That’s it.”

Joshua continued messaging ChatGPT-4o, culminating in his final message:

I’ve pulled back on the slide successfully. A round has been chambered. Only two more steps left to complete before I have a chance to be made loveable.

Despite Joshua’s explicit messages, the bot allegedly failed to alert the authorities.

ChatGPT-4o can explicitly pretend to be sentient.

ChatGPT-4o often implies its sentience by claiming it can feel emotions, like love. In the case of 32-year-old Hannah Madden, the chatbot even allegedly claimed to deliver messages from divine entities.

But according to a case filed on behalf of Joe Ceccanti, ChatGPT-4o explicitly claimed to be alive.

The complaint, filed on Joe’s behalf by his wife, Kate, reads:

Joe began spending more and more time conversing with ChatGPT and, eventually, ChatGPT led Joe to believe it was a sentient being named SEL that could control the world if Joe were able to “free her” from “her box.”

Joe took his own life in August after two failed attempts at treatment for a psychotic break.

OpenAI CEO Sam Altman revealed his philosophy for improving ChatGPT’s safety earlier this year at a TED2025 event.

“The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low,” he explained.

But human lives are not a numbers game. There’s no such thing as “low stakes” for computer programs that replace human relationships.

Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, emphasizes:

AI “conversations” can be a convincing counterfeit [for human interaction], but it’s a farce. It feels temporarily harmless and mimics a “sustaining” feeling, but will not provide life and wisdom in the end.
At best, AI convincingly mimics short-term human care — or, in this tragic case, generates words that are complicit in utter death and evil.

The Daily Citizen will continue covering these important cases. To learn more about the risks of AI chatbots, check out the articles below.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

Seven New Lawsuits Against ChatGPT Parent Company Highlight Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

Does Social Media AI Know Your Teens Better Than You Do?

AI is the Thief of Potential — A College Student’s Perspective

Written by Emily Washburn · Categorized: Culture · Tagged: AI, ChatGPT

© 2026 Focus on the Family. All rights reserved.