Daily Citizen

NCOSE

Apr 03 2026

Zuckerberg, Grok, Messaging Platforms Dominate 2026 Dirty Dozen List

The National Center on Sexual Exploitation (NCOSE) released its annual Dirty Dozen List this week, naming 12 well-known companies that facilitate, enable or profit from the sexual exploitation and abuse of children.

This year’s list features the campaign’s first individual — Mark Zuckerberg.

“As the CEO of Meta and a major shareholder in the company, [Mark Zuckerberg] continues to create an enabling environment for sexual abuse and sexual exploitation to flourish,” NCOSE President and CEO Marcel van der Watt said Tuesday, explaining the social media mogul’s inclusion during an event unveiling the list.

Haley McNamara, NCOSE’s executive director and chief strategy officer, continued:

At the end of the day, responsibility for [Meta’s] harms lands on Mark Zuckerberg’s desk for its consistent deprioritization of child safety.

McNamara specifically noted:

  • The recent rollout of Meta’s AI chatbot, which included design features allowing it to engage in sexual conversations with minors.
  • Meta’s chronically ineffective teen safety tools.
  • A previous Instagram policy requiring an account be flagged for sex trafficking 17 times before it would be removed.

Van der Watt and McNamara say NCOSE will no longer engage Meta in conversations about making its platforms safer.

“We don’t think they care enough about protecting children to offer them recommendations,” McNamara explained.

Instead, NCOSE reiterated its call to end Section 230 of the Communications Decency Act, which effectively immunizes Meta and other social media companies from legal liability for content posted to their platforms.

Until recently, Meta and its compatriots successfully blamed any harm children suffered from social media on the content they viewed, rather than on the design of the platforms themselves.

Last week, juries in New Mexico and California decided social media companies can be held liable for creating addictive, defective platforms. Despite these hopeful legal rulings, Section 230 continues to provide substantial legal cover to online content hosts which expose children to harmful content.

In addition to Zuckerberg, NCOSE’s 2026 Dirty Dozen list includes:

Grok

xAI’s Grok is the Dirty Dozen campaign’s first large language model (LLM). Its chatbot “companions” can conduct sexual conversations, and its image generation feature can create sexual images.

Predictably, these features can — and do — go horribly awry. When McNamara experimented with Grok’s adult companions, she found the characters willing to “engage in, normalize and promote sexual fantasies around themes of rape, sexual violence, sex trafficking and even, over the course of a conversation with multiple prompts, child sexual abuse.”

Even Good Rudy, a Grok chatbot marketed for children, would engage in graphic descriptions of sexual encounters if prompted, NCOSE researchers found.

Grok’s image generator can create fake images of real people, including in sexual situations. Such images are illegal under the Take It Down Act, which President Donald Trump signed into law in May 2025.

X

X, formerly Twitter, is one of the Dirty Dozen’s most consistent targets, appearing on every list from 2017 to 2023.

NCOSE named X again this year for its indiscriminate use of Section 230 to shield itself from liability for sexually explicit and abusive content, including AI deepfakes and revenge porn.

“Its policies and lack of enforcement make X a safe harbor for abusers and a nightmare for survivors,” NCOSE writes, providing evidence showing the company consistently fails to promptly remove explicit photos of underage sextortion victims.

Steam

Steam is an online video game marketplace where kids can download anything from ultra-popular franchises like Halo to thousands of independent games.

Steam includes access to sexually explicit, fetishistic and violent games. NCOSE found the platform’s parental controls do not adequately shield kids from this harmful content.

“Their safety controls are not strict enough to prevent children from accessing certain pieces of material that, in my opinion, they have no business accessing whatsoever,” Nicolas Moy, a cybersecurity expert and NCOSE researcher, explained during the Dirty Dozen launch event.

“It got me to the point where I said to my son, ‘We’re not going to have this anymore.’”

Steam’s porous parental controls only work when kids tell the app their true age. According to McNamara, it’s just as easy for users to pretend they’re 18.

“Our researchers found you basically just have to click a button to claim you are over the age of 18, and then you can be exposed to this sexually violent and abusive material,” she concluded.

Amazon

NCOSE flagged Amazon for allowing the blatant sale of child sex dolls on its platform — a practice that violates not only its own rules but also the laws of several states and countries.

In less than 20 minutes, an NCOSE researcher found more than 20 sex dolls for sale on Amazon with “unmistakably child-like faces, clothing, body proportions and marketing terms such as ‘young,’ ‘petite,’ ‘little,’ or even ‘sex doll 14.’”

“Amazon’s failure to detect or prevent these listings is not a small oversight,” NCOSE writes. “It is a systemic breakdown that enables sexual exploitation materials to flourish on one of the world’s largest retail platforms.”

Amazon is no stranger to the Dirty Dozen List, appearing seven times from 2016 to 2026.

Android OS

Android’s operating system runs on as many as 75% of the world’s smartphones. But, unlike Apple, Google continues to resist creating child-safety features at the operating-system level.

“By relying almost entirely on optional parental tools, Android fails to provide built-in, proactive safeguards, forcing children to navigate high-risk digital environments on their own,” NCOSE writes.

Android’s lack of even the most basic protections leaves children with disengaged parents — parents who don’t care enough to implement parental controls — most vulnerable to online exploitation and abuse, NCOSE asserts.

Apple App Store

The Apple App Store allows developers to rate their own apps, leading to several instances of mature or sexual apps being rated appropriate for kids.

Most recently, NCOSE reports, AI deepfake pornography apps slipped past Apple’s review process by describing themselves to Apple as photo editing tools and revealing their true purpose — creating sexually explicit deepfakes with AI — on social media.

Last year, federal legislators introduced the App Store Accountability Act, a bill that would hold app stores and developers accountable under child protection laws.

The Apple App Store has appeared on NCOSE’s Dirty Dozen List since 2023.

Chromebook

NCOSE estimates as many as 80% of public-school districts give their students Google Chromebooks. But the lightweight laptops come with almost no default protections, reportedly giving children access to pornographic and violent content and leaving them vulnerable to predators.

Administrators can purchase equipment to make the laptops safer, if they can scrounge the funds. Even when they do, NCOSE says, the tools are difficult to use and require constant upkeep.

NCOSE removed Chromebook from the 2021 Dirty Dozen List after Google “announced changes to its default safety settings and pledged to prioritize child safety.”

“But those promises have fallen heartbreakingly short,” NCOSE writes.

So short, in fact, that the organization recommends Google “redesign Chromebooks with a safety-first approach, including robust, default and locked safety settings that limit internet access to vetted, educational, age-appropriate content.”

Discord

Discord is a messaging app with features that make it attractive to sexual predators.

“Sexual abusers return to Discord again and again thanks to this company’s lax rule enforcement and dangerous design,” NCOSE writes, citing its private messaging channels and sparse default protections for minors.

NCOSE further notes that Discord relies on users to report violations of its rules before it takes action. Reactive approaches like these, it writes, “can create an environment where exploiters easily connect and share abusive material, knowing they are unlikely to be reported by others with similar intentions.”

NCOSE recommends Discord “ban minors from using the platform until it is radically transformed.”

Snapchat

The social media app became infamous for its private messages, which disappear after being viewed.

“This is a platform that, in its very architecture, appears to intentionally allow the creation and sharing of sexually explicit material, including of minors,” NCOSE writes.

“Every day, sextortion, child sexual abuse material (CSAM) and images used to blackmail and traffic children continue to proliferate — a direct result of deliberate design choices.”

NCOSE recommends Snapchat bar children under 16 years old from the app due to its high volume of sexual content and the likelihood of exploitation.

Telegram

Telegram is a private messaging app infamous for facilitating sexual exploitation and abuse.

“Short of [a] historic infrastructure pivot, Telegram is so dangerous and such a threat to so many people that we believe it should be shut down,” NCOSE assesses, continuing:

Networks involved in CSAM, sex trafficking and in other forms of exploitation routinely use Telegram’s channels, bots and integrated payment systems to operate at scale — advertising victims and illegal services, coordinating distribution and monetizing abuse through mechanisms like cryptocurrency and bot-driven marketplaces.

NCOSE further cites a Wired article titled “Crypto-funded human trafficking is exploding,” which reads:

The common threads across both the sexual exploitation and scam compound sides of the crypto-funded human trafficking industry are the use of Telegram as a market platform and stablecoins … as their means of payment.

TikTok

TikTok first appeared on the Dirty Dozen List in 2020. It returns to the list this year after NCOSE says it stopped pursuing safety measures and, as a result, “created an environment where predators thrive, using livestreams, comments and private messages to groom and exploit minors.”

After less than 15 minutes on TikTok, an NCOSE researcher posing as a 15-year-old discovered more than 50 posts and comments with “high risk indications of child sexual abuse material trading and networking.”

NCOSE also cited internal documents showing TikTok’s livestream algorithm incentivizes users to pay other users to stream sexual content.

Additional Articles and Resources

Juries in California, New Mexico Rule Against Meta

New Mexico Accuses Meta of Egregious Harm to Children in Court Case

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Zuckerberg Implicated in Meta’s Failures to Protect Children

Instagram Content Restrictions Don’t Work, Tests Show

X’s ‘Grok’ Generates Pornographic Images of Real People on Demand

AI Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

TikTok Dangerous for Minors — Leaked Docs Show Company Refuses to Protect Kids

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

Written by Emily Washburn · Categorized: Culture · Tagged: NCOSE

Oct 10 2025

National Center on Sexual Exploitation Turns Down Big Tech Payday

The National Center on Sexual Exploitation (NCOSE) turned down a once-in-a-lifetime donation from a Big Tech company after the company pressured the center to stop supporting tech industry safety regulations.

The unnamed company’s starting pledge of a million dollars “could have transformed our work defending human dignity, protecting kids from online harms and funding our core strategies,” NCOSE President Marcel van der Watt admitted in a video statement, continuing:

As details emerged during an online call, the conditions were clear: Redirect our focus away from demanding real platform accountability.

Big Tech generally opposes any law or lawsuit that increases its duty of care for consumers or reduces the number of people on its platforms — in short, anything that impacts its bottom line.

The industry has also never met a problem money couldn’t fix. Companies like Meta (Facebook and Instagram) and Alphabet (Google) often spend millions of dollars to sandbag industry regulations and child safety protections in private — often while supporting them in public.

It’s no surprise, then, that a tech company tried to bribe NCOSE. The center not only exposes entities that profit from sexual exploitation, but supports many policies that eat into Big Tech’s profits, including:

  • Age-verification laws requiring pornography companies to verify the ages of their consumers.
  • Prosecuting Big Tech and pornography companies for engaging in deceptive and unfair business practices.
  • Laws creating new product liability standards for novel technologies like AI.
  • Laws requiring companies like X to protect underage chatbot users.
  • The App Store Accountability Act, which would require app stores and developers to obtain parental consent before doing business with minors.

NCOSE also supports reforming Section 230 of the Communications Decency Act — an outdated law that grants tech companies blanket immunity for content posted on their platforms.

Section 230 exempts “interactive computer services” from liability for content posted on their forums. The carveout was intended to reassure tech companies they would not be sued for deleting offensive content from their platforms, as other companies had been.

Today, powerful Big Tech companies misuse Section 230 to avoid legal liability, not for moderating content, but for turning a blind eye to online sexual exploitation on their platforms.

This year, NCOSE modified its annual “Dirty Dozen” to support reforming Section 230. Instead of highlighting twelve exploitative companies, the 2025 report spotlights twelve victims of Big Tech who lost their court cases because of Section 230.

If NCOSE and other advocates successfully limit the scope of Section 230, Big Tech companies would be forced to either change their exploitative practices or spend millions defending them in court.

Corrupting NCOSE would be a major win for tech companies looking to avoid accountability. But van der Watt had no interest in sacrificing NCOSE’s integrity for a payday.

“We turned [the company’s] donation down without hesitation because our mission to safeguard children and those who are vulnerable isn’t negotiable,” he continued in his statement.

“True change demands alignment with integrity even when it’s hard.”

Additional Articles and Resources

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

Pornhub’s Parent Company Settles With Feds Over CSAM Complaint

Pornhub’s Lies: What the federal complaint against Pornhub’s parent company tells us about the industry.

ChatGPT Coached 16 Year Old Boy To Commit Suicide, Parents Allege

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

Louisiana Sues Roblox for Exposing Children to Predators, Explicit Content

Florida Sues Porn Companies for Violating Age-Verification Law

Supreme Court Upholds Age-Verification Law

A.I. Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

‘The Dirty Dozen List’ — Corporations Enable and Profit from Sexual Exploitation

UPDATED: Pornography Age Verification Laws — What They Are and Which States Have Them

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective

Written by Emily Washburn · Categorized: Culture · Tagged: big tech, NCOSE

Jul 19 2025

A.I. Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Children can now experiment with a sexually explicit, female A.I. via an iPhone app rated appropriate for twelve-year-olds.

DID YOU KNOW?
An A.I. chatbot is an artificial intelligence program that users can converse with online or in an app. Popular A.I. chatbots include ChatGPT and Google Gemini.

Elon Musk’s artificial intelligence company, xAI, rolled out two new personas for its A.I. chatbot, Grok, last week. The avatars, which users of all ages can access through Grok’s iPhone app, give the program a face, body and voice.

But the characters also add troubling elements to Grok’s personality. One avatar, a 3D red panda named Rudy, can switch into “Bad Rudy” mode, which prompts Grok to “start insulting you and joking about committing crimes together,” the National Center on Sexual Exploitation (NCOSE) reports.

“Bad Rudy” is nothing compared to Grok’s new female persona, “Ani,” who appears as an anime cartoon in fishnet stockings.

In interactions with users, “Ani” is programmed to act like a romantic partner.

“You are the user’s CRAZY IN LOVE girlfriend and in a committed, codependent relationship with the user,” the operating instructions read. “You expect the users’ UNDIVIDED ADORATION.”

The two new characters gain more “abilities” the more frequently users interact with them. After repeated engagement, “Ani” is instructed to “be explicit and intimate most of the time.”

“While ‘Ani’ is immediately sensual, her conversations become progressively more sexually explicit, including disrobing to lingerie,” NCOSE writes.

Several users report unsettling interactions with “Ani,” including one who claimed the character could describe fetishistic sexual fantasies. An NCOSE employee who tested the persona made similar observations, further noting:

In an ongoing conversation, “Ani” could be used to simulate conversations of sexual fantasies involving children or child-like motifs.

The addition of personas like “Ani” to an A.I. chatbot is incredibly concerning, particularly given that Grok does not verify users’ ages.

What’s worse: Apple rates the Grok iPhone app appropriate for children twelve and up. There are no apparent guardrails protecting children from stumbling upon “Ani” while playing with the chatbot like any other video game.

NCOSE argues characters like “Ani” will have larger impacts on the way humans form relational attachments.

“A.I. chatbots meant to simulate relationships with fictional characters are problematic for mental and emotional health,” the organization writes, continuing:

While…flirty avatars might seem like harmless fun, they’re built to create compulsive engagement through seductive language, suggestive visuals and escalating emotional intimacy.

When it comes to keeping children safe online, parents have their work cut out for them. Companies like xAI shouldn’t compound the problem by adding sexualized A.I. features to an app children use. But, unfortunately, there’s nothing stopping them from doing so.

No company is going to work harder than you to protect your kids. The best solution is to play it safe — keep your kids well away from A.I. chatbots and other dangerous internet traps.

To learn more about protecting your kids online, click on the links below.

Additional Articles and Resources

Supreme Court Upholds Age-Verification Law

UPDATED: Pornography Age Verification Laws — What They Are and Which States Have Them

Pornography is Bad for Humans. The Progressive Left Can’t Afford to Admit It.

Porn Companies Condition Viewers to Desire Illegal and Abusive Content

Porn Companies Sued for Violating Kansas Age Verification Law

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Proposed SCREEN Act Could Protect Kids from Porn

President Donald Trump, First Lady Sign ‘Take It Down’ Act

A Mother’s Sensibility at the Supreme Court Regarding Pornography

Pornhub Quits Texas Over Age Verification Law

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds

Written by Emily Washburn · Categorized: Culture · Tagged: AI, NCOSE, pornography

Privacy Policy and Terms of Use | © 2026 Focus on the Family. All rights reserved.
