Daily Citizen

Apr 08 2026

Feds Convict First Person for Crimes Under ‘Take It Down’ Act

Federal prosecutors secured their first conviction under the Take It Down Act yesterday after an Ohio man pleaded guilty to crimes including distributing sexually explicit AI deepfakes of women and children.

“We will not tolerate the abhorrent practice of posting and publicizing AI-generated intimate images of real individuals without consent,” U.S. Attorney for the Southern District of Ohio Dominick S. Gerace II wrote in a press release following James Strahler II’s guilty plea.

“We are committed to using every tool at our disposal to hold accountable offenders like Strahler, who seek to intimidate and harass others by creating and circulating this disturbing content.”

According to the U.S. attorney’s office, Strahler used AI to terrorize six women and their families between December 2024 and June 2025, when he was arrested. His “campaign of harassment” included sharing sexually explicit photos and videos of his victims — both real and AI-generated — and threatening to sexually assault them.

Strahler also threatened his victims’ mothers. The attorney’s office writes:

[Strahler] messaged the mothers of the adult female [victims] and demanded nude photos of them, threatening to circulate explicit or obscene images he created of their daughters if they did not comply.

The Take It Down Act, which both President Donald Trump and First Lady Melania Trump signed into law in May 2025, criminalizes the three behaviors Strahler used to harm his victims:

  • Sharing private, sexually explicit images and videos without permission.
  • Sharing sexually explicit digital forgeries, or deepfakes, of real people.
  • Sextortion, or threatening to share sexually explicit images and videos.

The Take It Down Act punishes sharing real and AI-generated explicit content with the same penalties, because the consequences of distributing sexually explicit images and videos of a real person don’t diminish when the content is fake.

Strahler faces up to two years in prison for each time he shared intimate content of his victims, and up to 18 months for each time he threatened to do so.

Strahler’s crimes aren’t limited to adults. He also used AI technology to create sexually explicit deepfake photos and videos of children, hundreds of which he posted to a website dedicated to distributing child sexual abuse material (CSAM).

Investigators found hundreds more images and videos containing “morphed CSAM” on his phone.

The Take It Down Act levies harsher penalties against offenders who exploit and sextort children. Strahler can face up to three years in prison for each piece of CSAM he distributed.

“Today marks the first conviction under the Take It Down Act — protecting victims from non-consensual AI-generated sexually explicit images, cyberstalking and threats of violence,” First Lady Melania Trump, who helped shepherd the bill through Congress, posted to X.

“Thank you U.S. Attorney Dominick S. Gerace II for protecting Americans from cybercrimes in this new digital age.”

The Daily Citizen thanks federal prosecutors in Ohio for putting the Take It Down Act to work in service of kids and victims of AI-based sexual abuse. This case creates a blueprint for other districts to begin leveraging powerful legislation against predators.

But the Take It Down Act can’t succeed in isolation, either. Incidents of AI-driven sextortion and leaked deepfakes, in particular, will continue growing beyond law enforcement’s capacity until America regulates how AI companies create and safety-test their chatbots.

We live in the digital wild west. Enforcing the Take It Down Act is just one step toward civilization.

Additional Articles and Resources:

Zuckerberg, Grok, Messaging Platforms Dominate 2026 Dirty Dozen List

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

First Lady Melania Trump Celebrates House’s Passage of Take it Down Act

First Lady Melania Trump Celebrates Committee passage of Bill Targeting Revenge Porn, Sextortion and Explicit Deepfakes

First Lady Supports Bill Targeting Deepfakes, Sextortion and Revenge Porn

Teen Boys Falling Prey to Financial Sextortion — Here’s What Parents Can Do

Meta Takes Steps to Prevent Kids From Sexting

Instagram’s Sextortion Safety Measures — Too Little, Too Late?

‘The Dirty Dozen List’ — Corporations Enable and Profit from Sexual Exploitation

Taylor Swift Deepfakes Should Inspire Outrage — But X Isn’t to Blame

Written by Emily Washburn · Categorized: Culture · Tagged: AI, deepfakes, social media, Take It Down Act

Mar 25 2026

Juries in California, New Mexico Rule Against Meta

Juries in California and New Mexico dealt Meta two costly legal defeats this week, reflecting Americans’ mounting frustration with social media companies’ unwillingness to protect children on their platforms.  

KGM v. Meta

A Los Angeles jury found today in KGM v. Meta that Meta’s and YouTube’s addictive social media platforms caused a young woman to experience sextortion, depression, anxiety and body image issues.

Until now, companies like Meta have blamed harms associated with social media on harmful content, rather than the design of social media platforms themselves. Under Section 230 of the Communications Decency Act, social media companies cannot be held liable for the content people post on their sites.

As of today, they can be held liable for designing addictive products.

“Today’s landmark verdict isn’t just a financial win for the plaintiff,” says President and CEO of Focus on the Family Jim Daly. “It’s an acknowledgment that Big Tech cannot willfully, recklessly and irresponsibly poison young hearts and minds in order to generate a profit.”

Meta and YouTube must pay the plaintiff and her family $2.1 million and $0.9 million in damages, respectively, with additional punitive fines to follow.

A spokesperson for YouTube says it plans to appeal the ruling. A spokesperson for Meta says the company is reviewing its legal options.

New Mexico v. Meta

Yesterday, a New Mexico jury found Meta violated state consumer protection laws by endangering children and “misleading consumers” about the safety of its platforms.

The jury required Meta to pay $5,000 — the maximum penalty possible — for every violation of the law, totaling $375 million.

“New Mexico is proud to be the first state to hold Meta accountable in court for misleading parents, enabling child exploitation and harming kids,” New Mexico Attorney General Raúl Torrez wrote in a press release.

“Today, the jury joined families, educators and child safety experts in saying enough is enough.”

A spokesperson for Meta told Fox Business it plans to appeal the ruling.

New Mexico v. Meta, which the state filed in 2023, featured evidence from an undercover operation in which law enforcement agents posed as children on Instagram and Facebook.

The investigation indicated Meta’s platforms:

  • Show underage users sexually explicit content without prompting.
  • Allow adult predators to contact children and sexually exploit them.
  • Facilitate the spread and exchange of child pornography.

What’s Next?

Though juries have delivered verdicts in both cases, proceedings in neither case are over.

In KGM v. Meta, the jury must deliberate over how much Meta and YouTube should pay in punitive damages. While compensatory damages are assigned to compensate an injured party, punitive damages are levied to punish offenders and deter further poor conduct.

The companies face punitive damages because the jury determined their actions meet the legal standard for “malice,” which includes highly egregious conduct.

In New Mexico v. Meta, the judge must rule on the state’s remaining claim: that the design and operation of Meta’s platforms are a public nuisance which must be remedied.

If Judge Bryan Biedscheid rules in the state’s favor, he could force Meta to make changes to its platforms, including “enacting effective age verification, removing predators … and protecting minors from encrypted communications that shield bad actors.”

Looking Ahead

The precedent set in KGM and New Mexico could dramatically increase future plaintiffs’ likelihood of bringing successful cases against neglectful social media companies.

KGM is the first of nine bellwether cases representing a group of more than 1,600 similar social media addiction cases filed in California state court. The jury’s decision today proves social media addiction cases can win in front of a jury.

New Mexico won its case against Meta in state court. Several other states hope to triumph against social media companies in federal court this summer.

The dozens of plaintiffs in this group of cases — primarily states and school districts — allege they pay the cost for citizens’ social media addiction. KGM helps establish exactly what the social and practical costs of social media addiction can include.

The Daily Citizen is grateful for the juries’ careful deliberation in these cases and their choice to hold companies accountable for their abusive business decisions.

The Daily Citizen will continue covering the social media addiction cases and America’s ongoing reckoning with social media.

Additional Articles and Resources

New Mexico Accuses Meta of Egregious Harm to Children in Court Case

Social Media Addiction Suits go to Trial — Here’s What You Need to Know

Meta’s Mark Zuckerberg Denies Instagram is Addictive in Social Media Trial Testimony

Australia Bans Kids Under 16 Years Old From Social Media

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Instagram’s Sextortion Safety Measures — Too Little, Too Late?

Key Takeaways From Zuckerberg’s Tell-All

Zuckerberg Implicated in Meta’s Failures to Protect Children

Instagram Content Restrictions Don’t Work, Tests Show

Surgeon General Recommends Warning on Social Media Platforms

Horrifying Instagram Investigation Indicts Modern Parenting

Written by Emily Washburn · Categorized: Culture · Tagged: social media

Mar 06 2026

New Mexico Accuses Meta of Egregious Harm to Children in Court Case

Trial proceedings in New Mexico’s court case against Meta will conclude later this month, with a judge to decide whether the beleaguered social media company violated state law by exposing minors to explicit content, social media addiction and sexual exploitation.

The state’s 228-page complaint, which New Mexico Attorney General Raúl Torrez filed in 2023, alleges Meta’s platforms target minors with addictive features and “knowingly expose them to the twin dangers of sexual exploitation and mental health harm.”

If First Judicial District Court Judge Bryan Biedscheid rules in New Mexico’s favor, Meta could face hundreds of millions of dollars in fines for violating state law protecting consumers from unfair and deceptive business practices.

The trial is expected to conclude on March 27.

New Mexico’s case is the first stand-alone state suit against Meta. It includes evidence from a months-long undercover operation in which New Mexico officers posed as children on Facebook and Instagram.

The investigation indicated Meta’s platforms:

  • Show underage users sexually explicit content without prompting.
  • Allow adult predators to contact children and sexually exploit them.
  • Facilitate the spread and exchange of child pornography.

In some cases, the state claims, Facebook recommended children join groups “devoted to facilitating commercial sex.”

In another case, investigators say Meta “allowed a fictitious mother [to] offer her 13-year-old daughter for sale to sex traffickers and to create a professional page to allow her daughter to share revenue from advertising.”

“Our investigation into Meta’s social media platforms demonstrates that they are not safe spaces for children but rather prime locations for predators to trade child pornography and solicit minors for sex,” Attorney General Torrez summarized in a press release.

Internal documents from Meta and testimony from former employees bolster New Mexico’s claims. Shortly before the trial began on February 9, the state published several troubling communications from a former Meta employee who worked in child safety.

In a June 2020 email reviewed by the New York Post, the employee revealed sexual predators on Meta platforms target “[approximately] 500k victims per day in English markets only.”

“We expect the true situation is worse,” she confided.

In another email, the employee considered the impact of giving such a large user base access to children.

“I just think nowhere in the history of humanity could you have a secret conversation with 1,000 people,” she wrote. “I’m actually scared of the ramifications here.”

Two-time Meta employee Arturo Béjar testified against his former employer on February 12. Béjar left Meta for the first time in 2015. He returned in 2019 to strengthen Meta platforms’ safety after someone sent his own daughter explicit photos online.

As far as Béjar could tell, Meta wasn’t interested in prioritizing safety.

“So many examples of people with good ideas for good things that would reduce harm within, as it got reviewed and went through the pipeline, would get pushed down,” KOAT quoted Béjar as testifying.

In 2021, Béjar surveyed more than 237,000 Instagram users between 13 and 15 years old to determine what kinds of harm they faced on social media. One in three reported witnessing cyberbullying. One in 10 said they, themselves, experienced bullying online. One in five reported seeing explicit images.

Though Béjar said he shared his findings with Meta CEO Mark Zuckerberg and other top executives, he claimed the company continued to prioritize profit.

“I think they [the executives] really care about making people think that they care, but I think in practice they don’t care,” Béjar mused, per KOAT.

“Caring is the moment you become aware of something, you engage with it, you understand it, you work on it, you do things that make it better.”

Meta, for its part, maintains New Mexico’s undercover investigation was “ethically compromised.” In opening arguments, the company claimed the state “cherry-picked” evidence which doesn’t accurately reflect its safety protocols.

Judge Biedscheid denied Meta’s request to dismiss New Mexico’s case in May 2024. He subsequently denied its pretrial motion to exclude evidence from the state’s undercover investigation.

Meta’s biggest problem is that New Mexico’s allegations echo those from thousands of other lawsuits against the social media company. More than 1,600 civil cases accusing Meta and other social media platforms of harming children — known as the social media addiction lawsuits — are awaiting trial in California state court.

Trial for the first of these cases began in Los Angeles in January.

Dozens more federal cases, including many brought against Meta by state governments, will make their way into courtrooms starting this summer.

Instagram, in particular, has long been linked to sextortion and inappropriate content. Predators met their victims on Instagram in nearly half (45%) of all the sextortion reports filed with the National Center for Missing and Exploited Children between August 2020 and August 2023.

Of the scammers who threatened to share explicit photos of minors, 60% threatened to do so on Instagram.

In January 2024, Meta announced all teen accounts would begin automatically filtering out inappropriate content. Journalists from The Wall Street Journal soon discovered the feature didn’t work.

“Instagram regularly recommends sexual videos to accounts for teenagers that appear interested in racy content, and does so within minutes of when they first log in,” the outlet wrote, continuing:

Within a half-hour of its creation, a new 13-year-old test account that watched only Instagram-recommended videos featuring women began being served video after video about anal sex.

Regardless of whether New Mexico triumphs against Meta in court, evidence from the state’s case clearly illustrates why minors should not be allowed on Instagram and Facebook unsupervised.

The Daily Citizen will continue covering America’s legal reckoning with social media and the harm it causes children.

Additional Articles and Resources

Social Media Addiction Suits go to Trial — Here’s What You Need to Know

Meta’s Mark Zuckerberg Denies Instagram is Addictive in Social Media Trial Testimony

Australia Bans Kids Under 16 Years Old From Social Media

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Instagram’s Sextortion Safety Measures — Too Little, Too Late?

Key Takeaways From Zuckerberg’s Tell-All

Zuckerberg Implicated in Meta’s Failures to Protect Children

Instagram Content Restrictions Don’t Work, Tests Show

Surgeon General Recommends Warning on Social Media Platforms

Horrifying Instagram Investigation Indicts Modern Parenting

Written by Emily Washburn · Categorized: Culture · Tagged: social media

Feb 26 2026

Deceptive Colorado Bill Hides Kids’ Social Media Activity from Parents

Colorado’s deceptive “Protections for Youth on Social Media” bill would allow minors to hide their online activity from their parents.

HB26-1148, which Representatives Yara Zokaie and Jenny Willford introduced in the Colorado House of Representatives earlier this month, would require social media and online gaming platforms to adopt high default privacy and data protection settings for minors’ accounts.

Plenty of other child protection bills, including the federal Kids Online Safety Act (KOSA), would require social media companies to institute more protections for minors on their platforms.

Unlike other bills, however, Colorado’s “Protections for Youth on Social Media” bill would conceal kids’ data from parents and strangers alike.

HB26-1148 would require covered companies to obtain a minor user’s permission for any adult — including their parents — to view their profile, account activity, friends or location.

To protect minors’ “data privacy,” HB26-1148 instructs:

[Covered businesses] shall not permit an individual, including a parent or guardian of a covered minor, to track the location of the covered minor without providing a conspicuous signal to the covered minor when the covered minor is being monitored or tracked (emphasis added).

To add insult to injury, the bill does not require social media or online gaming platforms to meaningfully protect children from obscene or inappropriate content. Covered companies do not have to verify the ages of users seeking to access mature content.

Instead, HB26-1148 specifies it should not be “interpreted or construed to … prevent or preclude a minor from deliberately or independently searching for or specifically requesting any media.”

In other words, HB26-1148 strips parents of any ability to monitor their kids’ social media activity while empowering minors to view any content and interact with any user they want.

Readers would never know from the bill’s official summary, which says only that HB26-1148 would require covered companies to provide minors with “the highest level of privacy,” that it eviscerates parents’ ability to protect their kids online.

Parents familiar with bills like KOSA, which would require social media companies to adopt comprehensive parental controls, might reasonably assume Colorado’s bill would exempt parents from privacy protections meant to shield children’s accounts from strangers and internet predators.

Unfortunately, they would be wrong.

Colorado’s “Protections for Youth on Social Media” may not directly affect all families, but Colorado legislators’ proposal and deceptive description of the bill should remind all parents to read the fine print.

Families simply cannot afford to take legislators’ word that a proposed bill is pro-family or pro-child. That’s why the Daily Citizen offers timely analysis and resources to help busy families stay abreast of important policy issues.

For more information about legislative happenings in your state — and how you can get involved — connect with your Focus on the Family-allied state policy group.

Additional Articles and Resources

How to Get In Touch With Your State Policy Group

Counseling Consultation & Referrals

Introducing Our Parents’ Guide to Technology 2026

Parenting Tips for Guiding Your Kids in the Digital Age

Child Safety Advocates Push Congress to Pass the Kids Online Safety Act

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Australia Bans Kids Under 16 Years Old From Social Media

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Written by Emily Washburn · Categorized: Culture · Tagged: social media

Feb 20 2026

Meta’s Mark Zuckerberg Denies Instagram is Addictive in Social Media Trial Testimony

Meta’s Mark Zuckerberg denied social media is addictive Wednesday in his testimony at America’s first social media addiction trial.

The highly anticipated case will determine whether social media companies like Meta can be held legally liable for creating addictive products.

The complaint contends Instagram, a social media app owned by Meta, contributed to the social media addiction of a young woman named Kayley (initials KGM). Kayley’s alleged addiction exposed her to child predators and caused her to develop depression, anxiety and body dysmorphia.

The question before the jury is whether Instagram and YouTube, the defendants, are addictive.  

Kayley and her lawyers argue the platforms themselves cause addiction with features like infinite scroll, content recommendation algorithms and beauty filters.

Meta and Zuckerberg say the content Kayley saw on Instagram, in addition to her preexisting trauma and mental health conditions, caused her distress — not Instagram’s design.

Importantly, Section 230 of the Communications Decency Act protects online content hosts, like Instagram, from being held liable for content users post to their sites.

Zuckerberg bobbed and weaved with characteristic agility during his testimony Wednesday, staunchly refusing any suggestion Instagram could be addictive.

“I’m not sure what to say to that,” Zuckerberg told Mark Lanier, Kayley’s lawyer, after Lanier asked whether addictive products increase usage. “I don’t think that applies here.”

Zuckerberg focused on convincing the jury Meta does not benefit from addicting its users.

“There’s a misconception that the more attention the company captures, and the more time people spend on its apps, the better it is for Meta’s bottom line, regardless of the harms they may encounter,” NPR paraphrased the Meta founder.

He continued:

If people feel like they’re not having a good experience, why would they keep using the product?

Here, Zuckerberg assumes users can leave Instagram at will. But the crux of the case against Meta is that Instagram is addictive, making it difficult or impossible for dependent users to quit without help.

Kayley’s mom fears social media “has changed the way [her daughter’s] brain works.”

“She has no longer term memory,” she wrote in a filing reviewed by the Los Angeles Times.

“She can’t live without a phone. She is willing to go to battle if you were even to touch her phone.”

Meta makes money from monopolizing users’ time — which means the company has a financial incentive to make its products addictive.

Internal documents referenced by Lanier show Instagram and Zuckerberg responded to this incentive on several occasions.

In 2016, when Instagram was competing with Snapchat for teen users, Zuckerberg reportedly “directed executives to focus on getting teenagers to spend more time on the company’s platforms,” according to The New York Times.

Another document from 2018 reads: “If we wanna win big with teens, we must bring them in as tweens.” Instagram’s user policy supposedly excludes children under 13 years old.

Zuckerberg argued this incentive structure no longer exists. Now, Instagram focuses on making the platform useful to users.

“If something is valuable, people will use it more because it’s useful to them,” he explained.

But therein lies the problem. Though Instagram may no longer explicitly focus on increasing usage, Zuckerberg’s testimony indicates the amount of time users spend on Instagram remains the platform’s ultimate metric of success.  

In other words, Meta still has an incentive to make its products addictive — whether Zuckerberg admits it or not.

Meta’s incentive structure speaks to the company’s motivations. If Zuckerberg can help convince the jury Meta does not benefit from addicting its users, then the social media company can claim it has no reason to create harmful products.

The argument is a subtle one, particularly compared to the ever-growing mountain of evidence showing social media harms children.

Meta’s history of releasing faulty or unsubstantial child safety protections further undermines Zuckerberg’s claim Meta does not increase usage at any cost.

When Instagram launched new tools addressing sextortion in October 2024, the National Center on Sexual Exploitation (NCOSE) warned:

Mark Zuckerberg and Meta leadership have a very long track record of explicitly making the choice not to rectify harms it knows it is both causing and perpetuating on children … or only doing so when forced by public pressure, bad press or Congressional hearings.

Senator Marsha Blackburn (TN), the author of the bipartisan Kids Online Safety Act, communicated a similar sentiment in her remarks on Zuckerberg’s testimony.

“To no one’s surprise, Mark Zuckerberg followed his usual playbook of denial and deceit while sitting just a few steps away from parents who have tragically lost their children as a consequence of the way his platforms are designed to harm young users,” Blackburn said, continuing:

These companies are using the same playbook as Big Tobacco did decades ago by trying to keep kids hooked on products that hurt them.

Kayley’s case is the first of nine bellwethers, or test cases, for an estimated 1,600 similar civil cases filed against social media companies in California state court. Trial proceedings for the first of a large group of federal social media addiction cases will begin this summer.

Additional Articles and Resources

Social Media Addiction Suits go to Trial — Here’s What You Need to Know

America Controls TikTok Now—But Is It Really Safe?

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

Australia Bans Kids Under 16 Years Old From Social Media

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

TikTok Dangerous for Minors — Leaked Docs Show Company Refuses to Protect Kids

Instagram’s Sextortion Safety Measures — Too Little, Too Late?

Key Takeaways From Zuckerberg’s Tell-All

Zuckerberg Implicated in Meta’s Failures to Protect Children

Instagram Content Restrictions Don’t Work, Tests Show

Surgeon General Recommends Warning on Social Media Platforms

Horrifying Instagram Investigation Indicts Modern Parenting

Survey Finds Teens Use Social Media More Than Four Hours Per Day — Here’s What Parents Can Do

Child Safety Advocates Push Congress to Pass the Kids Online Safety Act

Many Parents Still Fail to Monitor Their Kids’ Online Activity, Survey Shows

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

President Donald Trump, First Lady Sign ‘Take It Down’ Act

First Lady Supports Bill Targeting Deepfakes, Sextortion and Revenge Porn

Written by Emily Washburn · Categorized: Culture · Tagged: social media

Privacy Policy and Terms of Use | © 2026 Focus on the Family. All rights reserved.