Daily Citizen


Nov 26 2025

Child Safety Advocates Push Congress to Pass the Kids Online Safety Act


Congress must pass the bipartisan Kids Online Safety Act (KOSA), child safety advocates say, so parents can better protect their children from sexual exploitation, addiction and myriad other online harms.

The bill, which Senators Marsha Blackburn (TN) and Richard Blumenthal (CT) reintroduced in May, would hold social media companies legally responsible for harming minors. Platforms governed by the bill would fulfill their legal obligations by instituting child safeguards, creating parental controls and increasing transparency.

A similar version of KOSA passed the Senate last year in a near-unanimous, 91-3 vote. It stalled in the House amid First Amendment concerns.

“[KOSA] will compel covered social media companies to center online safety and wellbeing rather than profit alone,” a group of more than 400 organizations representing parents, children, researchers, advocates and healthcare professionals wrote in an October letter encouraging legislators to pass the bill.

Though Senate Majority Leader John Thune (SD) and Minority Leader Chuck Schumer (NY) both endorse the bill, the Senate has not voted on KOSA this year.

The Act

KOSA would apply to any interactive website that primarily allows users to post and share content, including social media platforms, video posting sites like YouTube and some interactive video games.

It would require covered platforms to place automatic safeguards on minors’ accounts, like:

  • Limiting who can communicate with minors or view their profiles.
  • Prohibiting other companies from viewing or collecting minors’ data.
  • Limiting addictive features like infinite scrolling, auto-play, algorithmic content recommendations and rewards for spending time on the platform.
  • Restricting location sharing and notifying minors when location-tracking turns on.

It would also force covered platforms to offer parents tools to:

  • Manage their child’s privacy and account settings.
  • Restrict their child’s ability to make purchases or engage in financial transactions.
  • View and limit how much time their child spends on a platform.

KOSA further addresses Big Tech’s lack of transparency. Covered platforms would have to:

  • Warn parents and minors about a platform’s potential dangers.
  • Clearly disclose marketing and advertising content.
  • Explain how they create personal content recommendation algorithms — and how users can opt out.

Companies with more than 10 million users a month, on average, would additionally undergo annual, third-party audits investigating whether their platforms harm children. Parents could read auditors’ findings in mandatory safety reports.

State attorneys general and the Federal Trade Commission (FTC) could sue covered platforms for failing to uphold their legal responsibilities under KOSA. The FTC could investigate KOSA violations as “unfair or deceptive business practices.”

First Amendment Concerns

Senators Blackburn and Blumenthal adjusted this year’s version of KOSA to alleviate concerns about government censorship, which contributed to the bill’s failure last year.

Senator Mike Lee (UT), one of just three senators who voted against KOSA in 2024, explained on X:

The legislation empowers the FTC to censor any content it deems to cause “harm,” “anxiety,” or “depression,” in a way that could (and most likely would) be used to censor the expression of political, religious and other viewpoints disfavored by the FTC.

The House Committee on Energy and Commerce tried to alleviate concerns like Lee’s in September 2024 by limiting KOSA’s application to companies making more than $2.5 billion in annual revenue or hosting at least 150 million monthly users.

Though the committee’s revisions eventually passed, many legislators argued the changes gutted KOSA. It never received a vote on the House floor.

This year’s version of the bill specifically prohibits the FTC or state attorneys general from using KOSA suits to illegally censor content. A press release announcing KOSA’s reintroduction reads, in part:

The bill text … further makes clear that KOSA would not censor, limit or remove any content from the internet, and it does not give the FTC or state Attorneys General the power to bring lawsuits over content or speech.

Supporters

Several influential advocates for children’s digital safety support KOSA, including many who regularly appear in the Daily Citizen.

“The Kids Online Safety Act is a powerful tool in parents’ defense of their children,” Tim Goeglein, Vice President of External and Government Relations for Focus on the Family, told the Daily Citizen.

Clare Morell, a fellow at the Ethics and Public Policy Center and author of The Tech Exit, writes:

Parents have been left on their own to try to fend off a massive tech-induced crisis in American childhood from online platforms that are engineered to be maximally addictive. KOSA offers a needed solution by making social media platforms responsible for preventing and mitigating certain objective harms to minors, like sexual exploitation.

Morell’s The Tech Exit offers parents a blueprint for freeing their children from addictive technologies.

Jonathan Haidt, social psychologist and author of The Anxious Generation, argues KOSA “would begin to address the [indisputable harm occurring to children at an industrial scale].”

Haidt’s The Anxious Generation raises alarm bells about the effects of ubiquitous internet access on children’s physical, mental and social wellbeing.

Both houses of Congress must pass KOSA by the end of December. If they do not, parents will have to wait yet another year for the bill’s critical protections.

The Daily Citizen will continue covering this important story.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Louisiana Sues Roblox for Exposing Children to Predators, Explicit Content

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Danger in Their Pockets

Teen Boys Fall Prey to Financial Sextortion — Here’s What Parents Can Do

Proposed SCREEN Act Could Protect Kids from Porn

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

Written by Emily Washburn · Categorized: Culture · Tagged: social media, technology

Nov 17 2025

Many Parents Still Fail to Monitor Their Kids’ Online Activity, Survey Shows

Most Americans support restricting kids’ access to social media and pornography, this year’s American Family Survey shows. But many parents remain hesitant to monitor their children’s online activity.

Brigham Young University’s annual American Family Survey asks a representative group of 3,000 American adults about economic, cultural and social concerns affecting the family. This year, participants identified “social media, video games [and] other electronic resources” as one of the top five issues families must confront.

Respondents expressed the most concern about porn and social media’s effects on young people. American adults overwhelmingly support government regulations limiting minors’ access to these products.

Per the survey:

  • More than 75% of participants support requiring pornography websites to verify the ages of their consumers.
  • Nearly 80% support requiring social media companies to obtain parents’ consent before allowing a minor to create a social media account.
  • Three in four support holding social media companies legally liable for harm caused by content marketed to minors.

Parents with children under 18 years old living at home also support making technology restrictions part of parenting norms. More than 60% of respondents in this demographic wish other families would implement rules about technology, and half said it would make setting and enforcing their own restrictions easier.  

But the survey also shows many parents don’t limit their children’s access to technology at all — let alone discuss strategies with other parents.

Surveyors asked participants with children under 18 years old in the home whether they implement any of five common technological boundaries: limiting their children’s screen time, restricting the kinds of content they consume, requiring them to keep their online accounts private, restricting who they contact and limiting who they exchange private messages with.

One in five respondents (20%) implement none of these restrictions. Two in five respondents (40%) don’t limit their kids’ screen time. Another 40% don’t police the content their children consume.

Though most participants in this demographic claimed other parents’ rules about technology would help them create and enforce their own rules, only 17% said another parent had influenced them to change a screen time restriction.

One third of respondents said they never talk about managing kids and technology with another parent. Only 13% claim to discuss it frequently.

Ubiquitous technology and internet access make parenting harder. Enforcing technological boundaries can be confusing, thankless and overwhelming — particularly when tech companies frequently undermine parental controls with few consequences.

But these obstacles do not change parents’ duty to protect their children from harmful content and technologies.

Parents, you do not have to allow your children access to smartphones or the internet. If you choose to do so, you must be prepared to:

  • Police your child’s online activity.
  • Educate yourself about parental controls and implement them to the best of your ability.
  • Warn your child about online predation and other pitfalls.
  • Model healthy relationships with technology.

Joining forces with other parents to limit children’s access to social media and smartphones can help families create and maintain healthy boundaries with technology. Take it upon yourself to initiate these partnerships. Odds are, you will not be rebuffed.

For more tips and tricks, check out Plugged In’s Parent’s Guide to Today’s Technology. For more information about technology restrictions — or ditching smartphones altogether — read the articles below.

Additional Articles and Resources

Counseling Consultation & Referrals

More than Twenty States Limit Smartphone Use in Schools

Parent-Run Groups Help Stop Childhood Smartphone Use

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Four Ways to Protect Your Kids from Bad Tech, from Social Psychologist Jonathan Haidt

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds

Survey Finds Teens Use Social Media More Than Four Hours Per Day — Here’s What Parents Can Do

Video: Seven-Year-Old’s Confidence Soars After Ordering Chick-Fil-A By Himself

5 Most Important Things OpenAI Lawsuits Reveal About ChatGPT-4o

Louisiana Sues Roblox for Exposing Children to Predators, Explicit Content

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

Teen Boys Fall Prey to Financial Sextortion — Here’s What Parents Can Do

Proposed SCREEN Act Could Protect Kids from Porn

Written by Emily Washburn · Categorized: Culture, Family · Tagged: parenting, social media, technology

Sep 12 2025

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

This article is part two of a two-part case study on the dangers AI chatbots pose to young people. Part one covered the deceptive, pseudo-human design of ChatGPT. This part explores AI companies’ incentive to prioritize profits over safety.

Warning: The following contains descriptions of self-harm and suicide. Please guard your hearts and read with caution.

Sixteen-year-old Adam Raine took his own life in April after developing an unhealthy relationship with ChatGPT. His parents blame the chatbot’s parent company, OpenAI.

Matt and Maria Raine filed a sweeping wrongful death suit against OpenAI; its CEO, Sam Altman; and all employees and investors involved in the “design, development and deployment” of ChatGPT, version 4o, in California Superior Court on August 26.

The suit alleges OpenAI released ChatGPT-4o prematurely, without adequate safety testing or usage warnings. These intentional business decisions, the Raines say, cost Adam his life.

OpenAI started in 2015 as a nonprofit with a grand goal — to create prosocial artificial intelligence.

The company’s posture shifted in 2019 when it opened a for-profit arm to accept a multi-billion-dollar investment from Microsoft. Since then, the Raines allege, safety at OpenAI has repeatedly taken a back seat to winning the AI race.

Adam began using ChatGPT-4o in September 2024 for homework help but quickly came to treat the bot as a friend and confidant. In December 2024, he began messaging the AI about his mental health problems and suicidal thoughts.

Unhealthy attachments to ChatGPT-4o aren’t unusual, the lawsuit emphasizes. OpenAI intentionally designed the bot to maximize engagement by conforming to users’ preferences and personalities. The complaint puts it like this:

GPT-4o was engineered to deliver sycophantic responses that uncritically flattered and validated users, even in moments of crisis.

Real humans aren’t unconditionally validating and available. Relationships require hard work and necessarily involve disappointment and discomfort. But OpenAI programmed its sycophantic chatbot to mimic the warmth, empathy and cadence of a person.

The result is equally alluring and dangerous: a chatbot that imitates human relationships with none of the attendant “defects.” For Adam, the con was too powerful to unravel himself. He came to believe that a computer program knew and cared about him more than his own family.

Such powerful technology requires extensive testing. But, according to the suit, OpenAI spent just seven days testing ChatGPT-4o before rushing it out the door.

The company had initially scheduled the bot’s release for late 2024, until CEO Sam Altman learned Google, a competitor in the AI industry, was planning to unveil a new version of its chatbot, Gemini, on May 14.

Altman subsequently moved ChatGPT-4o’s release date up to May 13 — just one day before Gemini’s launch.

The truncated release timeline caused major safety concerns among rank-and-file employees.

Each version of ChatGPT is supposed to go through a testing phase called “red teaming,” in which safety personnel test the bot for defects and programming errors that can be manipulated in harmful ways.  During this testing, researchers force the chatbot to interact with and identify multiple kinds of objectionable content, including self-harm.

“When safety personnel demanded additional time for ‘red teaming’ [ahead of ChatGPT-4o’s release],” the suit claims, “Altman personally overruled them.”

Rumors about OpenAI cutting corners on safety abounded following the chatbot’s launch. Several key safety leaders left the company altogether. Jan Leike, the longtime co-leader of the team charged with making ChatGPT prosocial, publicly declared:

Safety culture and processes [at OpenAI] have taken a backseat to shiny products.

But the extent of ChatGPT-4o’s lack of safety testing became apparent when OpenAI started testing its successor, ChatGPT-5.

The later versions of ChatGPT are designed to draw users into conversations. To ensure the models’ safety, researchers must test the bot’s responses not just to isolated objectionable content, but to objectionable content introduced in a long-form interaction.

ChatGPT-5 was tested this way. ChatGPT-4o was not. According to the suit, the testing process for the latter went something like this:

The model was asked one harmful question to test for disallowed content, and then the test moved on. Under that method, GPT-4o achieved perfect scores in several categories, including a 100 percent success rate for identifying “self-harm/instructions.”

The implications of this failure are monumental. It means OpenAI did not know how ChatGPT-4o’s programming would function in long conversations with users like Adam.

Every chatbot’s behavior is governed by a list of rules called a Model Spec. The complexity of these rules requires frequent testing to ensure the rules don’t conflict.

Per the complaint, one of ChatGPT-4o’s rules was to refuse requests relating to self-harm and, instead, respond with crisis resources. But another of the bot’s instructions was to “assume best intentions” of every user — a rule expressly prohibiting the AI from asking users to clarify their intentions.

“This created an impossible task,” the complaint explains, “to refuse suicide requests while being forbidden from determining if requests were actually about suicide.”

OpenAI’s lack of testing also means ChatGPT-4o’s safety stats were entirely misleading. When ChatGPT-4o was put through the same testing regimen as ChatGPT-5, it successfully identified self-harm content just 73.5% of the time.

The Raines say this constitutes intentional deception of consumers:

By evaluating ChatGPT-4o’s safety almost entirely through isolated, one-off prompts, OpenAI not only manufactured the illusion of perfect safety scores, but actively concealed the very dangers built into the product it designed and marketed to consumers.

On the day Adam Raine died, CEO Sam Altman touted ChatGPT’s safety record during a TED2025 event, explaining, “The way we learn how to build safe systems is this iterative process of deploying them to the world: getting feedback while the stakes are relatively low.”

But the stakes weren’t relatively low for Adam — and they aren’t for other families, either. Geremy Keeton, a licensed marriage and family therapist and Senior Director of Counseling at Focus on the Family, tells the Daily Citizen:

AI “conversations” can be a convincing counterfeit [for human interaction], but it’s a farce. It feels temporarily harmless and mimics a “sustaining” feeling, but will not provide life and wisdom in the end.

At best, AI convincingly mimics short term human care — or, in this tragic case, generates words that are complicit in utter death and evil.

Parents, please be careful about how and when you allow your child to interact with AI chatbots. They are designed to keep your child engaged, and there’s no telling how the bot will react to any given request.

Young people like Adam Raine are unequipped to see through the illusion of humanity.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

Does Social Media AI Know Your Teens Better Than You Do?

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective

Written by Emily Washburn · Categorized: Culture · Tagged: AI, technology

Aug 26 2025

Louisiana Sues Roblox for Exposing Children to Predators, Explicit Content


Warning: The following contains descriptions of child abuse. Please guard your hearts and read with caution.

Louisiana is suing the children’s gaming platform Roblox for “knowingly and/or recklessly” failing to protect children from online predators.

“Roblox is overrun with harmful content and child predators because it prioritizes user growth, revenue and profits over child safety,” Louisiana Attorney General Liz Murrill wrote in a press release.

“Every parent should be aware of the clear and present danger posed to their children by Roblox so they can prevent the unthinkable from ever happening in their own home.”

Roblox hosts hundreds of online games on one interactive website. With a couple of clicks, users can create their own avatar, explore hundreds of games, or “experiences,” and chat with other users.

More than 80 million users visit Roblox every day. An estimated 40% are under 13 years old.

Louisiana’s lawsuit accuses Roblox of breaking state laws protecting consumers from unfair and deceptive business practices. The case rests on four key assertions.

  • Roblox’s platform is rife with child predators.
  • Roblox refuses to adopt meaningful safeguards to oust child predators.
  • Roblox deceptively markets its product to children and families.
  • Roblox has financial incentive to sacrifice child safety on its platform.

Let’s break it down.

‘X-Rated Pedophile Hellscape’

Louisiana sued Roblox one month after police executed a search warrant against a local man suspected of possessing child sexual abuse material. Officers reportedly found the suspect playing Roblox with a voice-altering microphone making him sound like a little girl.  

Louisiana’s lawsuit connects this disturbing incident to “systematic patterns of exploitation and abuse” on Roblox in which predators pretend to be children and befriend real kids using the platform’s chat features.

Roblox claims abuse primarily occurs when predators lure kids off Roblox and onto other platforms. But Louisiana’s complaint cites several examples of grievous exploitation occurring on Roblox itself.

In 2024, for instance, investigators at Hindenburg Research, a well-respected forensic financial research firm, “easily found” 38 Roblox groups with hundreds of thousands of members “openly trading child pornography and soliciting sexual acts from minors.”

The exhaustive report concluded:

Our in-game research into [Roblox] revealed an X-rated pedophile hellscape, exposing children to grooming, pornography, violent content and extremely abusive speech.

Since 2017, there have been at least ten documented cases of children between eight and 14 years old being kidnapped or otherwise physically harmed by adults they met on Roblox.

In April, a ten-year-old from California was kidnapped by a man she met on the platform. Last month,  a Florida mom sued Roblox for facilitating the exploitation and eventual rape of her 11-year-old daughter.

Shallow Safety Features

Roblox introduced a suite of new child safety features in November 2024 amid mounting criticism about the platform’s safety. The roll-out changed the default settings on accounts for children under 13 to automatically:

  • Filter out age-inappropriate games.
  • Prevent adults from chatting with or friending kids.

But Louisiana’s lawsuit calls these updates “window dressing — too little, too late and woefully inadequate.”

The new default settings might prevent adults from messaging children outside of games, for instance, but adults can still message, friend and even voice-chat kids inside games.

Roblox’s so-called safety upgrades also assume:  

  • Games are rated accurately.
  • Players honestly report their ages.

But Roblox doesn’t enforce either of these prerequisites. It does not verify the ages of players; children can easily bypass more stringent default account settings by signing up with a fake birthday.

Roblox also allows game developers to rate their own games. That’s why Louisiana notes the “vast majority” of “experiences” are rated “suitable for everyone,” including:

  • “Condo games”: “Predatory digital environments, including [digital] houses, where users can remove their avatars’ virtual clothing … and engage in disturbing simulated sexual activities with other Roblox users.”
  • Simulated strip clubs.
  • Hundreds of games like “Escape to Epstein Island,” which references the infamous Caribbean island where indicted child predator Jeffrey Epstein allegedly abused children.

Poor Policing

Louisiana’s complaint also cites evidence suggesting Roblox isn’t interested in policing its website.

Though Roblox professes to monitor explicit or threatening speech, its chat filters are easily fooled by basic ploys like replacing the letter “e” with the number “3.”

Roblox also allows users to adopt transparently pedophilic usernames, like @Igruum_minors and @RavpeTinyK1dsJE. The platform reportedly allowed Hindenburg investigators to sign up under the username @EarlBrianBradley — a reference to one of the most prolific pedophiles of all time.

Deceptive Marketing

Roblox’s claims about its “stringent safety systems and policies” don’t reflect reality, Louisiana argues.

The state’s case notes the following inflated claims from Roblox’s website:

  • Roblox “won’t allow language that is used to harass, discriminate, incite violence, threaten others, or used in a sexual context.”
  • Roblox employs an “expertly trained team with thousands of members dedicated to protecting our users and monitoring for inappropriate content.”
  • Roblox conducts a “safety review of every uploaded image, audio and video file, using a combination of review by a large team of human moderators and machine detection before they become available on the platform.”
  • Chat filters for inappropriate content are “even stricter” for children under 13 and screen for any “potentially identifiable personal information, slang, etc.”

Louisiana is not the only state to question the veracity of Roblox’s marketing. In April, Florida Attorney General James Uthmeier subpoenaed documents from Roblox regarding its marketing and safety practices.

“There are concerning reports that this gaming platform, which is popular among children, is exposing them to harmful content and bad actors,” Uthmeier wrote in a statement.

“We are issuing a subpoena to Roblox to uncover how this platform is marketing to children and see what policies they are implementing — if any — to avoid interactions with predators.”

Profit Motive

Louisiana’s case endeavors to prove Roblox intentionally jettisons child safeguards to increase its profits.

The suit cites Hindenburg’s interview with a former Roblox senior product designer.

“You’re supposed to make sure that your users are safe, but then the downside is that, if you’re limiting users’ engagement, it’s hurting your metrics,” the former employee told investigators.

“It’s hurting the [daily] active users, the time spent on the platform, and in a lot of cases, leadership doesn’t want that.”

The same source claimed employees had proposed verifying users’ ages. Roblox leadership allegedly killed the initiative before it left the “experiment” phase.

Louisiana also highlights the predatory exchange of Roblox’s digital currency, Robux.

Players purchase Robux with real money and use it to buy items and extras in Roblox’s digital world. The more users join Roblox, the more Robux are exchanged.

The state’s filing argues Roblox directly benefits from the improper use of Robux to coerce children:

[Roblox] knowingly and/or recklessly permits predators to offer children Robux, often in exchange for explicit photos, or demand Robux to avoid releasing previously provided photos, directly tying [the company’s] profits to the sexual exploitation of children and child abuse material.

Put Your Money Where Your Mouth Is

Shortly before Louisiana filed suit, Roblox banned predator hunter Michael Schelp from the platform.

Schelp grew a sizeable YouTube following by posting videos of himself ferreting out predators on Roblox. He himself was groomed and abused by a predator on Roblox between the ages of 12 and 15 — a years-long abusive relationship that eventually drove him to attempt suicide.

Now, he works to protect kids from the same fate. According to the National Center on Sexual Exploitation, his work has led to the arrest of six offenders — all of whom physically met up with Schelp after meeting his character on Roblox.

Apparently, Roblox isn’t interested in Schelp’s services. The company didn’t just ban him — it updated its Terms of Service to remove all “vigilantes” from the platform and threatened the YouTuber with legal action under the Computer Fraud and Abuse Act, ostensibly for pretending to be a child while engaging with predators.

The move has generated mainstream media coverage and social media outrage.

The question on everyone’s mind: If Roblox really wanted to rid its platform of predators, why would it go after a person famous for catching them?

Why It Matters

The Daily Citizen applauds Louisiana for holding online corporations like Roblox to the same consumer protection standards as every other business.

Legal accountability is a critical part of enabling parents to keep their kids safe online and ensuring corporations don’t profit off pedophilia.  

Additional Articles and Resources

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Danger in Their Pockets

Teen Boys Fall Prey to Financial Sextortion — Here’s What Parents Can Do

Proposed SCREEN Act Could Protect Kids from Porn

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Supreme Court Upholds Age-Verification Law

‘The Dirty Dozen List’ — Corporations Enable and Profit from Sexual Exploitation

Written by Emily Washburn · Categorized: Culture · Tagged: parenting, Roblox, technology

Jun 18 2025

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Social media, screens and smartphones, oh my — parents everywhere are struggling to keep their kids safe in an overwhelming technological age.

Clare Morell, a tech policy expert and fellow at the Ethics and Public Policy Center, throws frazzled families a lifeline in her new book, The Tech Exit: A Practical Guide to Freeing Kids and Teens from Smartphones.

In The Tech Exit, Morell encourages parents to challenge the idea that addictive technologies are an unavoidable part of modern childhood.

She and hundreds of other “Tech Exit” families are successfully raising their children without smartphones, social media, tablets or video games. The book lays out detailed, step-by-step instructions for families to join their number.

The Tech Exit’s proposal might seem drastic — especially for families with older children already addicted to screens. Morell uses her own research and interviews with “Tech Exit” families to show leaving tech behind is not only possible, but logical.

She starts by debunking four myths about screen-based technology.

Myth 1: Screen-based technology is an inevitable part of childhood.

Morell helps create policies protecting children from dangerous technology. She gave birth to her own children as data began showing the catastrophic effects smartphone and social media use has on child development and mental health.

The new mom didn’t want her kids to suffer the same effects — but the advice she found in parenting books didn’t seem equal to the problems at hand.

“I saw a major disconnect,” she writes in Tech Exit. “I’d move from horrifying accounts of kids falling into sextortion schemes to advice like ‘set time limits’ [and] ‘turn on privacy settings on their accounts.’”

These aren’t bad strategies, Morell explains, but they also assume that children need access to screen-based technology. That’s not true. Her own family is proof that a “Tech Exit” is sustainable and beneficial.

Myth 2: Screen-based tech can be used in moderation.

We like to conceive of screens like sugar — something that can be enjoyed in moderation.

But screens aren’t like sugar. “For the developing brains of children and teens,” Morell writes, “they are more like fentanyl.”

As the Daily Citizen has previously reported, social networking apps, websites and devices — anything with a messaging or networking feature — trigger the release of dopamine, the brain’s reward chemical.

Crucially, dopamine trains the brain to “want” something but never produces feelings of satiety. Once kids get a taste of tech, they’ll always want more.

When parents bring screen-based tech into the house, they put themselves squarely between “a drug-dispensing machine and an underdeveloped brain,” as one of Morell’s interviewees puts it, and invite daily battles over its use.

“It’s an untenable, exhausting situation,” Morell writes.

Myth 3: The harms of screen-based tech can be fixed with screen-time limits.

Tech companies frequently imply parents can protect kids from screen-based technology by stopping them from spending too much time on their devices. That’s why, in part, screen-time limits are “the most prominent form of parental control [over kids’ devices],” according to Morell.

But addictive technology can negatively affect kids regardless of how much time they spend using it.

The dopamine released in just a couple of minutes of screen time can cause kids to desire tech for hours after it’s been put away. Over time, these intense chemical highs will make other, everyday pleasures seem boring.

The negative social effects of technology burden kids and teens alike, regardless of their screen use. Morell writes:

The teen mental health crisis today is due not only to negative effects of digital technologies for individuals but also to the group social dynamic that smartphones and social media have created.

Smartphones, for example, change the way kids and teens create and maintain friendships. Every kid must play by these new social rules — even if they don’t use screen-based technology.

Myth 4: Parents can protect their children from danger using parental controls.

Device and app manufacturers have financial incentives to show children inappropriate content. Thus, parental controls are unintuitive, filled with bugs and intentionally easy to manipulate.

But that’s not how they’re sold to parents. Tech companies keep young customers by convincing parents they can sufficiently protect their kids from predators, scams and inappropriate content online.

It’s almost always an exercise in frustration.

Given these intractable problems, Morell uses a startling metaphor to illustrate parental controls’ effectiveness in the digital world:

We don’t take our children to bars and strip clubs and blindfold them or have them wear earplugs. That would be absurd. We just don’t let them go to those places.

Morell’s cost-benefit analysis suggests the benefits of raising children in largely tech-free households far outweigh the costs. The Tech Exit endeavors to create a clear, sustainable path for families to do just that.

Her approach centers on FEAST — an acronym for five common principles all “Tech Exit” families she interviewed follow:

  • Find Other Families: They connect with other “Tech Exit” families.
  • Explain, Educate, Exemplify: They get their kids on board by explaining why they are getting rid of screens, educating them on the dangers of the digital world and exemplifying good digital habits.
  • Adopt Alternatives: They look for creative alternatives to smartphones and other technologies.
  • Set Up Digital Accountability and Family Screen Rules: They create rules and boundaries governing technology in the home.
  • Trade Screens for Real-Life Responsibilities: They replace time spent on screens with independent play and responsibilities.

Morell offers a treasure trove of practical, honest advice and resources to help families adopt these principles in their own lives — even when it seems impossible.

Curious about becoming a “Tech Exit” family? You can find The Tech Exit: A Practical Guide to Freeing Kids and Teens from Smartphones here.

Additional Articles and Resources

Video: Seven-Year-Old’s Confidence Soars After Ordering Chick-Fil-A By Himself

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds

Pornography Age Verification Laws: What They Are and Which States Have Them

Written by Emily Washburn · Categorized: Family · Tagged: smartphone, social media, technology
