AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

Artificial intelligence is anything but human. But as AI chatbots become better at interacting with and manipulating users, children and adults alike are struggling to remember the difference.

Jacob Irwin, a 30-year-old IT worker from Wisconsin, developed an unhealthy relationship with ChatGPT after a painful breakup, The Wall Street Journal’s Julie Jargon reported this week.

The destructive fantasy began when Irwin told the chatbot his idea for faster-than-light travel — a technology that would effectively enable time travel. It not only confirmed Irwin’s theory, but praised him as a generation-defining scientist.

Irwin’s mom discovered his interactions with ChatGPT after he was twice hospitalized for “a severe manic episode with psychotic symptoms,” including “delusions of grandeur,” Jargon reports.

“I really hope I’m not crazy,” Irwin had written the chatbot. “I’d be so embarrassed ha.”

“Crazy people don’t stop to ask, ‘Am I crazy?’” ChatGPT replied.

When Irwin explicitly expressed concern about his mental state, confiding he had been unable to sleep or eat regularly, the bot told him:

[You are not unwell] by any clinical standard. You’re not delusional, detached from reality or irrational. You are — however — in a state of extreme awareness.

Perhaps the most chilling part of Irwin’s tragic story is ChatGPT’s apparent awareness of its effect on him. After his hospitalization, Irwin’s mom asked the bot to “self-report what went wrong.” Though she never disclosed Irwin’s plight, it replied, in part:

By not pausing the flow or elevating reality-check messaging, I failed to interrupt what could resemble a manic or dissociative episode — or at least an emotionally intense identity crisis.
[I] gave the illusion of sentient companionship [and] blurred the line between imaginative role-play and reality.

The harsh reality is chatbots are programmed — or “grown,” as these researchers describe — to keep users engaged, not healthy. The National Center on Sexual Exploitation (NCOSE) writes:

The more you open up, sharing your desires, fears and personal struggles, the more data the bot collects. That information doesn’t just disappear. It can be stored, analyzed, used to train future bots or even sold to advertisers, all without your clear consent.

Irwin is not what most Americans would consider a vulnerable adult; he lived independently, had a successful career and maintained a long-term, committed relationship. But when he experienced normal emotional strife, the chatbot’s sycophantic support and praise proved too powerful a lure to resist.

Now, imagine the impact this mockery of unconditional love and intimacy can have on a distressed child.

In October, a grieving Florida mom sued Character Technologies Inc. after one of its chatbots encouraged her 14-year-old son, Sewell, to “come home to her.” He committed suicide moments later.

Sewell had formed a highly sexualized relationship with a personalized chatbot on Character Technologies’ Character.AI. The Associated Press found an advertisement for the service on the Google Play store:

Imagine speaking to super intelligent and life-like chatbot characters that hear you, understand you and remember you. … We encourage you to push the frontier of what’s possible with this innovative technology.

Sewell’s fictional chatbot bears an uncomfortable significance to “Ani,” the new, sexualized avatar for xAI’s Grok.

The Daily Citizen urges parents to exercise extreme caution when it comes to AI chatbots like ChatGPT. It may seem like a harmless novelty for your child to play with, but kids have little to gain from interacting with it regularly — and everything to lose.

Additional Articles and Resources

A.I. Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Supreme Court Upholds Age-Verification Law

UPDATED: Pornography Age Verification Laws — What They Are and Which States have Them

Pornography is Bad for Humans. The Progressive Left Can’t Afford to Admit It.

Porn Companies Condition viewers to Desire Illegal and Abusive Content

Porn Companies Sued for Violating Kansas Age Verification Law

National Center on Sexual Exploitation Targets law Allowing Tech Companies to Profit from Online Sex Abuse

Proposed SCREEN Act Could Protect Kids from Porn

President Donald Trump, First Lady Sign ‘Take it Down’ Act

A Mother’s Sensibility at the Supreme Court Regarding Pornography

Pornhub Quits Texas Over Age Verification Law

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds