Daily Citizen


Feb 06 2026

Social Media Addiction Suits Go to Trial — Here’s What You Need to Know

Social media companies face a legal reckoning this year as juries begin hearing the social media addiction lawsuits.

The long-awaited wave of cases will determine whether companies like Meta, YouTube and TikTok can face legal consequences for creating defective products.

The social media addiction lawsuits refer to thousands of civil cases alleging social media companies like Meta, YouTube and TikTok designed and released addictive products without adequate warnings, causing personal and financial injury.

The cases parallel the product liability lawsuits against tobacco companies, which eventually paid $206 billion for damage caused by addictive cigarettes.

The strongest cases are bundled into two groups — cases filed in federal court and cases filed in California state court. The federal cases primarily represent school districts and states which claim they foot the bill for young people’s social media addiction. The first of these cases will go to trial this summer.

The cases filed in California represent an estimated 1,600 individuals, families and school districts which claim social media addictions caused them personal injury.

The first of these cases — a 2023 suit filed on behalf of a young woman named KGM and several other plaintiffs — began trial proceedings on January 27.

KGM allegedly developed depression, anxiety and body-image issues after years of using YouTube, Instagram, TikTok (formerly Musical.ly) and Snapchat. She also became a victim of sextortion on Instagram, where a predator shared explicit photos of her as a minor.

The New York Post, which reviewed KGM’s complaint, describes her alleged experience with Meta following her victimization:

When KGM’s family reported her alleged sextortion to Meta … the company did nothing, [KGM’s] complaint claims, and instead allowed the person to continue committing harm via “explicit images of a minor child.”
According to the court filing, it took multiple family members and friends “spamming” Instagram’s moderation system in a coordinated, two-week effort before Meta handled it.

Predators met their victims on Instagram in nearly half (45%) of all the sextortion reports filed with the National Center for Missing and Exploited Children between August 2020 and August 2023.

Of the scammers who threatened to share explicit photos of minors, 60% threatened to do so on Instagram.

When Instagram launched new tools to address sextortion in October 2024 — long after KGM’s experience — the National Center on Sexual Exploitation (NCOSE) warned:

Mark Zuckerberg and Meta leadership have a very long track record of explicitly making the choice not to rectify harms it knows it is both causing and perpetuating on children … or only doing so when forced by public pressure, bad press or Congressional hearings.

KGM’s mom allegedly used third-party parental control apps to keep her daughter from using social media. It didn’t work. Per the complaint, Meta, YouTube, TikTok and Snap “design their products in a manner that enables children to evade parental consent.”

Social media “has changed the way [my daughter’s] brain works,” KGM’s mom wrote in a filing reviewed by the Los Angeles Times, continuing:

She has no long-term memory. She can’t live without a phone. She is willing to go to battle if you were even to touch her phone.

Snap, the company behind Snapchat, and TikTok settled with KGM in January. Meta and YouTube are expected to proceed with the trial.

KGM’s case is the first of nine California bellwether cases — test cases used to gauge whether a novel legal theory will hold up in court.

The mystery is whether KGM’s argument can circumvent Section 230 of the Communications Decency Act, which prevents online platforms from being sued for users’ posts.

Social media companies have thus far escaped product liability accusations using Section 230. In a November motion to keep KGM’s case from going before a jury, Meta blamed the harm KGM experienced on content posted to Instagram, for which it argued it could not be held responsible.

Judge Carolyn B. Kuhl, who oversees the state cases, ruled against the social media juggernaut, finding KGM had presented evidence indicating Instagram itself — not content posted to Instagram — caused her distress.

“Meta certainly may argue to the jury that KGM’s injuries were caused by content she viewed,” Judge Kuhl wrote. “But Plaintiff has presented evidence that features of Instagram that draw the user into compulsive viewing of content were a substantial factor in causing her harm.”

“The cause of KGM’s harms is a disputed factual question that must be resolved by the jury,” she concluded.

If KGM or any other bellwether case wins, Section 230 may no longer offer unconditional protection for social media companies which behave badly.

The social media addiction cases could send social media companies an expensive message about prioritizing child wellbeing. But parents shouldn’t rely on companies’ altruism (or grudging compliance with court orders) to protect their kids.

Social media is not a safe place for children. Parents should seriously consider keeping their children off it.

To read more of the Daily Citizen’s reporting on the effects of social media on children, read the articles linked below.

Additional Articles and Resources

America Controls TikTok Now—But Is It Really Safe?

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Australia Bans Kids Under 16 Years Old From Social Media

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

TikTok Dangerous for Minors — Leaked Docs Show Company Refuses to Protect Kids

Instagram’s Sextortion Safety Measures — Too Little, Too Late?

Key Takeaways From Zuckerberg’s Tell-All

Zuckerberg Implicated in Meta’s Failures to Protect Children

Instagram Content Restrictions Don’t Work, Tests Show

Surgeon General Recommends Warning on Social Media Platforms

Horrifying Instagram Investigation Indicts Modern Parenting

Survey Finds Teens Use Social Media More Than Four Hours Per Day — Here’s What Parents Can Do

Child Safety Advocates Push Congress to Pass the Kids Online Safety Act

Many Parents Still Fail to Monitor Their Kids’ Online Activity, Survey Shows

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

President Donald Trump, First Lady Sign ‘Take It Down’ Act

First Lady Supports Bill Targeting Deepfakes, Sextortion and Revenge Porn

Written by Emily Washburn · Categorized: Culture · Tagged: social media

Jan 27 2026

America Controls TikTok Now — But Is It Really Safe?

TikTok’s Chinese parent company, ByteDance, relinquished control of American TikTok to the U.S. late last week.

The transaction staves off the looming TikTok ban without sacrificing national security. The platform itself, however, remains as dangerous as ever for its 170 million American users.

TikTok USDS Joint Venture, an American-controlled company, took over the American divisions of TikTok, Lemon8, CapCut and several other ByteDance apps on Thursday, January 22.

American investors own 50% of the new company. ByteDance retains 19.9% ownership and the remaining 30% belongs to ByteDance investors.

American control of the joint venture solves three national security threats caused by Chinese ownership of ByteDance.

First and foremost, American users’ data will be hosted on U.S.-based servers and protected by U.S. cybersecurity companies.

TikTok always claimed to keep the extensive data it collected on Americans secure. But Chinese law requires companies like ByteDance to make their data available to the government.

A congressional investigatory committee determined Chinese officials had mined Americans’ TikTok data on multiple occasions. That data included:

  • Names
  • Ages
  • Emails
  • Phone numbers
  • Contact lists
  • In-app messages and usage patterns
  • IP addresses
  • Keystroke patterns
  • Browsing and search history
  • Location data
  • Biometric information like face- and voiceprints

Chinese access to ByteDance’s data extended to TikTok’s powerful content recommendation algorithm. TikTok USDS will reset the algorithm and retrain it on American content alone, ensuring China can no longer manipulate what U.S. citizens see on TikTok.

ByteDance will no longer perform content moderation under TikTok USDS, further preventing China from censoring or influencing the success of Americans’ posts.

ByteDance’s divestiture of American TikTok satisfies the Protecting Americans from Foreign Adversary Controlled Applications Act — the ban-or-sell law which required the company to sell majority ownership of TikTok for the app to remain available in America.

President Donald Trump issued an executive order delaying enforcement of the ban in January 2025, giving TikTok time to negotiate with American buyers. When ByteDance and U.S. investors established a framework for the deal in September, the president gave the parties another 120 days to sign on the dotted line.

The launch of TikTok USDS Joint Venture on January 22 came just one day before the deadline.

American control of TikTok might well protect citizens from global security threats. It does not, however, protect users from TikTok itself.

Thirteen states and the District of Columbia sued TikTok in October 2024 for illegally collecting and monetizing American children’s data.

The suits estimated as much as 35% of TikTok’s American ad revenue under ByteDance came from children and teens. Importantly, ByteDance will retain control of TikTok’s e-commerce, marketing and advertising under TikTok USDS.

The 2024 lawsuits exposed documents showing TikTok not only knew its app was addictive, but that compulsive use in teens caused “a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, increased anxiety … [and interference] with essential responsibilities like sufficient sleep, work/school responsibilities and connecting with loved ones.”

An estimated 95% of American smartphone users under 17 years old use TikTok, according to one of the platform’s own reports.  

Still, TikTok workers did not consider it their responsibility to limit minors’ use of the platform — even when creating a tool allowing parents to set TikTok time limits for their kids.

“Our goal is not to reduce the time spent [on TikTok],” a project manager for the tool wrote in an employee group chat.

“[The goal is] to contribute to daily active users and retention [of other users],” another chimed in.

TikTok evaluated the success of the time limit tool based on one metric alone: “Whether it improved public trust in the TikTok platform via media coverage.”

TikTok doesn’t just fail to protect kids — it targets them. When the Apple App Store challenged TikTok’s 12-and-up age rating in 2022, arguing its “frequent or intense mature or suggestive content” warranted a 17-and-up rating, TikTok refused to change it.

The Daily Citizen supports policies which protect American families from foreign threats. But make no mistake — TikTok remains a dangerous place, particularly for young users.

Parents should think long and hard before allowing their kids to take part.

Additional Articles and Resources

Plugged In Parent’s Guide to Today’s Technology 

TikTok Dangerous for Minors — Leaked Docs Show Company Refuses to Protect Kids

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

TikTok Scrambles Amid Looming Ban

Trump Revives TikTok

Supreme Court Upholds TikTok Ban

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Tim Tebow to Parents: ‘Please Be the Protectors You’re Called to Be’

Written by Emily Washburn · Categorized: Culture · Tagged: social media, TikTok

Jan 16 2026

UPDATED: Pornography Age Verification Laws — What They Are and Which States Have Them

Half of all states — Louisiana, Arkansas, Virginia, Utah, Montana, Texas, North Carolina, Indiana, Idaho, Florida, Kentucky, Nebraska, Georgia, Alabama, Kansas, Oklahoma, Mississippi, South Carolina, Tennessee, South Dakota, Wyoming, North Dakota, Missouri, Arizona and Ohio — require pornography companies to verify the ages of their online consumers.

Ten more states hope to pass age verification legislation in 2026.

Described by Politico as “perhaps the most bipartisan laws in the country,” age verification laws help parents protect their kids by making it harder for minors to access adult content online.

Most age verification bills:

  • Require companies who publish a “substantial” amount of adult content — usually 1/3 or more of their total production — to check the age of every person accessing their website.
  • Create a way for parents to sue pornography companies if their kids access content they shouldn’t.

The Supreme Court found age verification requirements like these constitutional in June 2025, silencing critics who argued they infringe on free speech and privacy rights.

While most age verification laws contain the same basic components, few are identical.

Some states add age verification requirements for social media companies. Minnesota’s House File 1875 would require social media companies to exclude children younger than 14 from their platforms.

Michigan’s Senate Bill 284 would require manufacturers like Apple to verify device users’ ages and communicate that information to other apps and websites.

Wyoming’s HB 43, now law, requires all online websites which publish or host adult content — no matter how little — to verify consumers’ ages.

States also employ different strategies to pass age verification bills.

Ohio rolled its age verification law into the bill establishing the state’s 2026-2027 budget. Missouri legislators introduced five bills this month to build on the state’s existing age verification regulations.

Hawaii separated its legislation into two bills — one establishing age verification requirements and another creating penalties for violators — so representatives could approve the requirements even if they disagreed with proposed penalties.

While not perfect, age verification laws greatly restrict the amount of porn young people can access. After Louisiana became the first state to pass such legislation in 2022, traffic to Pornhub.com from that state dropped by 80%, one spokesperson told the Institute for Family Studies.

Scroll down to see the status of age verification bills in different states. To find out more about age verification and parents’ rights legislation in your state, contact your local Focus on the Family-allied Family Policy Council.

States in dark blue have passed age verification laws. States in light blue have active age verification bills. Missouri has both passed and pending age verification legislation.
Age Verification Laws

Louisiana
HB 142 became law on June 15, 2022.
Date effective: January 1, 2023

Arkansas
SB 66 became law on April 11, 2023.
Date effective: July 31, 2023

Virginia
SB 1515 became law on May 12, 2023.
Date effective: July 1, 2023

Utah
SB 0287 became law on May 4, 2023.
Date effective: May 3, 2023

Montana
SB 544 became law on May 19, 2023.
Date effective: January 1, 2024

Texas
HB 1181 became law on June 12, 2023.
Date effective: September 19, 2023

North Carolina
HB 8 became law on September 29, 2023.
Date effective: January 1, 2024

Indiana
SB 17 became law on March 13, 2024.
Date effective: August 16, 2024

Idaho
HB 498 became law on March 21, 2024.
Date effective: July 1, 2024

Florida
HB 3 became law on March 25, 2024.
Date effective: January 1, 2025

Kentucky
HB 278 became law on April 5, 2024.
Date effective: July 15, 2024

Nebraska
Online Age Verification Liability Act became law on April 16, 2024.
Date effective: July 18, 2024

Georgia
SB 351 became law on April 23, 2024.
Date effective: July 1, 2025

Alabama
HB 164 became law on April 24, 2024.
Date effective: October 1, 2024

Kansas
SB 394 became law without the Governor’s signature on April 25, 2024.
Date effective: July 1, 2024

Oklahoma
SB 1959 became law on April 26, 2024.
Date effective: November 1, 2024

Mississippi
HB 1126 became law without the Governor’s signature on April 30, 2024.
Date effective: July 1, 2024

South Carolina
HB 3424 became law on May 29, 2024.
Date effective: January 1, 2025

Tennessee
HB 1642/SB 1792 became law on June 3, 2024.
Date effective: January 13, 2025

South Dakota
HB 1053 became law on February 27, 2025.
Date effective: July 1, 2025

Wyoming
HB 43 became law on March 13, 2025.
Date effective: July 1, 2025

North Dakota
HB 1561 became law on April 11, 2025.
Date effective: August 1, 2025

Missouri
Rule 15 CSR 60-17.010 published on May 7, 2025.
Date effective: November 30, 2025

Arizona
HB 2112 became law on May 13, 2025.
Date effective: September 26, 2025

Ohio
HB 96 became law on June 30, 2025.
Date effective: September 30, 2025

Age Verification Bills

Hawaii
HB 1212: carried over to the 2026 session on December 8, 2025.
HB 1198: carried over to the 2026 session on December 8, 2025.

Iowa
HF 864 (formerly HF 62): placed on subcommittee calendar for the Senate Committee on Technology on January 13.
SF 443 (formerly SF 236): referred to Senate Committee on Technology on June 16, 2025.

Michigan
SB 901: referred to Senate General Laws Committee on January 8.
SB 284 (HB 4429): referred to the Senate Committee on Finance, Insurance and Consumer Protection on May 6, 2025.
HB 4429 (SB 284): referred to House Committee on Regulatory Reform on September 18, 2025.

Minnesota
HF 1875: referred to House Committee on Commerce, Finance and Policy on March 5, 2025.
SF 2105 (HF 1434): referred to Senate Committee on Commerce and Consumer Protection on March 3, 2025.
HF 1434 (SF 2105): referred to House Committee on Commerce, Finance and Policy on February 24, 2025.

Missouri
HB 1878: referred to House Committee on General Laws on January 8.
HB 1839: referred to House Committee on Children and Families on January 15.
SB 901: referred to Senate General Laws Committee on January 8.
SB 1346: read in the Senate on January 7.
SB 1412: read in the Senate on January 7.
New Hampshire
SB 648: heard by Senate Judiciary Committee on January 8.

New Jersey
S 1826: referred to Senate Judiciary Committee on January 13.

New York
S 3591 (A 03946): referred to Senate Committee on Internet and Technology on January 7.
A 03946 (S 3591): referred to Assembly Consumer Affairs and Protection Committee on January 7.

Pennsylvania
HB 1513: referred to House Communications and Technology Committee on May 29, 2025.
SB 603: referred to Senate Judiciary Committee on April 9, 2025.

Washington
HB 2112: heard in the House Committee on Consumer Protection and Business on January 16.

Wisconsin
AB 105: second amendment proposed in the Senate on January 7.

Written by Emily Washburn · Categorized: Culture, How to Get Involved · Tagged: age verification, parental rights, social media

Jan 15 2026

Our Christian Worldview is About More Than Our Apologetics

Cultural flashpoints seem to be occurring with increasing frequency, undoubtedly facilitated and exacerbated by the explosion of social media. Situations and issues are quickly magnified and exploited — and more and more people seem to have opinions about all kinds of things that once upon a time would have been well off their radar.

Arguably, though, even social media’s many downsides have their upsides. Among them is the way the medium reveals an individual’s worldview, for good or ill.

A saying often attributed to Mark Twain advises, “It is better to keep your mouth shut and let people think you are a fool than to open it and remove all doubt.” This philosophy is ignored by too many people in cyberspace, many of whom weigh in on issues well beyond their expertise. They epitomize King Solomon’s stinging observation:

“A fool’s lips walk into a fight, and his mouth invites a beating” (Proverbs 18:6).

One man who is the antithesis of this is Dr. Del Tackett, a former senior vice president here at Focus on the Family and the creator and host of “The Truth Project” — Focus on the Family’s award-winning Christian worldview curriculum. Many of us here at Focus had the privilege of sitting under Del’s teaching before the lessons were put on film. 

At the heart of Dr. Tackett’s classes in this area is the conviction that Christians should be looking at every aspect of the world through the truth of Scripture.

“A worldview is a set of truth claims that purports to paint a picture of reality,” states Del. Biblical truth claims are unchanging and absolute while cultural claims more often vacillate with the changing times. In essence, our worldview is the lens through which we see reality.

One aspect of our worldview that seems to be overlooked involves how we tend to find what we’re looking for in this world. In Matthew’s gospel, Jesus declares, “Ask, and it will be given to you; seek, and you will find; knock, and it will be opened to you” (7:7). 

If we’re seeking the negative and the hypocritical, it’s a given that in a fallen world, we’ll find it — and even among Christians. Conversely, if we’re keeping our eyes open for the pure and the true, we’ll find that, too. 

It’s always been a curious thing when Christians seem to relish pointing out what they perceive to be pharisaical behavior among other believers. “Iron sharpens iron” (Proverbs 27:17), but it is a destructive practice to gleefully run down another Christian. Correction, if necessary, should be handled carefully and soberly.

Yet another concerning dimension of a certain worldview approach is the tendency to assume the worst of people with whom we disagree. Christian couples often have the Apostle Paul’s words to believers in Corinth read at their wedding: “Love bears all things, believes all things, hopes all things, endures all things” (1 Cor. 13:7), but the sentiment is not limited to romantic love.

Do we have a generosity of spirit when evaluating others? Simply because we don’t like someone’s politics or policies, must we nitpick and find fault? Henry Wadsworth Longfellow astutely observed, “We judge ourselves by what we feel capable of doing, while others judge us by what we have already done.” 

Scripture provides lots of evidence and encouragement for optimism (Jeremiah 29:11, Romans 15:13) and passages that will resonate with a more pessimistic posture (Psalms, Lamentations). Yet, ideally, a balanced worldview appreciates that both are realities to grapple with and work through on a daily basis. 

Dr. Tackett correctly suggested, “The battle we are in today is not primarily political or social — it is a battle of worldviews.” That struggle is not just existential but also very personal and requires us to examine our own hearts as we navigate the complexities of culture.

Written by Paul Batura · Categorized: Culture · Tagged: social media

Jan 08 2026

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

A damaging new editing feature allows people on X (formerly Twitter) to generate sexually explicit images and videos of real people using the platform’s built-in AI chatbot, Grok.

“Grok Imagine,” which the bot’s parent company, xAI, rolled out in late November, enables Grok to manipulate photos and videos. Users can request Grok alter photos and videos posted to X in the post’s comment section.

xAI owner Elon Musk promoted “Grok Imagine” on Christmas Eve. The platform subsequently flooded with fake images of real people stripped naked or performing simulated sex acts. On at least two occasions, Grok produced sexual photos of children.

Samantha Smith was one of the first women victimized by “Grok Imagine.” The devoted Catholic described her experience in a piece for the Catholic Herald:

My clothes were digitally removed. My face was plastered into sexual situations I had no control over and no desire to be involved in. I remember looking at it and feeling exposed in a way that was difficult to explain to anyone who had not experienced it.

“It did not matter that the image was fake,” Smith emphasized. “The sense of violation was real.”

The disastrous fallout of “Grok Imagine” is a predictable consequence of Grok’s design.

xAI spent much of last year training Grok to perform some sexual functions by feeding it explicit internet content. The company introduced female Grok avatars capable of undressing, trained Grok to hold sexually explicit conversations with users, and even allowed the bot to generate some pornographic images.

Grok is one of the only mainstream AI chatbots designed to perform sexual functions. Most competitors block sexual content entirely, because it’s far easier to train a chatbot to refuse all sexual requests than to teach it which requests are illegal.

When xAI started feeding Grok pornographic internet content, it inevitably exposed the bot to illegal content like child sexual abuse material (CSAM).

By September 2025, Grok had already generated sexual images of children.

“This was an entirely predictable and avoidable atrocity,” Dani Pinter, Chief Legal Officer and Director of the Law Center at the National Center on Sexual Exploitation wrote in a press release.

“Had X rigorously culled [CSAM and other abusive content] from its training models and then banned users requesting illegal content, this would not have happened.”

The “Grok Imagine” debacle exposes America’s lack of AI regulation.

Sharing explicit AI deepfakes is illegal under the Take It Down Act, which penalizes sharing explicit, AI-generated images of adults with up to two years in prison. Those who share explicit images of children face up to three years in jail.

The mass implementation of “Grok Imagine” on X dramatically — and rapidly — increased violations of the Take It Down Act, making it impossible for the FBI to identify and prosecute every perpetrator.

Further, no legislation or court precedent holds AI parent companies legally liable for building defective chatbots. Companies like xAI have no incentive to conduct robust safety testing or implement consumer protection protocols.  

“X’s actions are just another example of why we need safeguards for AI products,” Pinter argues. “Big Tech cannot be trusted to curb serious child exploitation issues it knows about within its own products.”

Grok’s latest shenanigans illustrate why children and teens should not use AI chatbots — particularly without adult supervision. “Grok Imagine” also makes X even less safe for children, who could easily stumble on one of the thousands of deepfakes plaguing the platform.

Widespread pornographic deepfakes could soon infect other social media platforms. The National Center for Missing and Exploited Children (NCMEC) fielded 67,000 reports of AI-generated CSAM in 2024 — more than 14 times as many as in 2023.  

NCMEC received more than 440,000 reports of AI-generated CSAM in the first half of 2025 alone.

Parents should seriously consider the exploding prevalence of AI-generated pornography before allowing their child to use any social media platform.

Parents should carefully consider sharing their own photos online. In the age of AI, it only takes one bad actor to turn a sweet family photo into something sinister and damaging.

Additional Articles and Resources

Counseling Consultation & Referrals

Parenting Tips for Guiding Your Kids in the Digital Age

You Don’t Need ChatGPT to Raise a Child. You Need a Mom and Dad.

AI Company Releases Sexually Explicit Chatbot on App Rated Appropriate for 12 Year Olds

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

Seven New Lawsuits Against ChatGPT Parent Company Highlights Disturbing Trends

ChatGPT Parent Company Allegedly Dismantled Safety Protocols Before Teen’s Death

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

ChatGPT ‘Coached’ 16-Yr-Old Boy to Commit Suicide, Parents Allege

AI “Bad Science” Videos Promote Conspiracy Theories for Kids – And More

Does Social Media AI Know Your Teens Better Than You Do?

AI is the Thief of Potential — A College Student’s Perspective

Written by Emily Washburn · Categorized: Culture · Tagged: AI, social media


Privacy Policy and Terms of Use | © 2026 Focus on the Family. All rights reserved.
