Daily Citizen


Oct 10 2025

National Center on Sexual Exploitation Turns Down Big Tech Payday

The National Center on Sexual Exploitation (NCOSE) turned down a once-in-a-lifetime donation from a Big Tech company after it pressured the center to stop supporting tech industry safety regulations.

The unnamed company’s starting pledge of a million dollars “could have transformed our work defending human dignity, protecting kids from online harms and funding our core strategies,” NCOSE President Marcel van der Watt admitted in a video statement, continuing:

As details emerged during an online call, the conditions were clear: Redirect our focus away from demanding real platform accountability.

Big Tech generally opposes any law or lawsuit that increases its duty of care for consumers or reduces the number of people on its platforms — in short, anything that cuts into its bottom line.

The industry has also never met a problem money couldn’t fix. Companies like Meta (Facebook and Instagram) and Alphabet (Google) routinely spend millions of dollars to sandbag industry regulations and child safety protections in private — often while supporting them in public.

It’s no surprise, then, that a tech company tried to bribe NCOSE. The center not only exposes entities that profit from sexual exploitation, but supports many policies that eat into Big Tech’s profits, including:

  • Age-verification laws requiring pornography companies to verify the ages of their consumers.
  • Prosecuting Big Tech and pornography companies for engaging in deceptive and unfair business practices.
  • Laws creating new product liability standards for novel technologies like AI.
  • Laws requiring companies like X to protect underage chatbot users.
  • The App Store Accountability Act, which would require app stores and developers to obtain parental consent before doing business with minors.

NCOSE also supports reforming Section 230 of the Communications Decency Act — an outdated law that grants tech companies blanket immunity for content posted on their platforms.

Section 230 exempts “interactive computer services” from liability for content posted on their forums. The carveout was intended to reassure tech companies that they would not be sued for removing offensive content from their platforms, as other companies had been.

Today, powerful Big Tech companies misuse Section 230 to avoid legal liability, not for moderating content, but for turning a blind eye to online sexual exploitation on their platforms.

This year, NCOSE modified its annual “Dirty Dozen” list to support reforming Section 230. Instead of highlighting twelve exploitative companies, the 2025 report spotlights twelve victims of Big Tech who lost their court cases because of Section 230.

If NCOSE and other advocates successfully limit the scope of Section 230, Big Tech companies would be forced to either change their exploitative practices or spend millions defending them in court.

Corrupting NCOSE would be a major win for tech companies looking to avoid accountability. But van der Watt had no interest in sacrificing NCOSE’s integrity for a payday.

“We turned [the company’s] donation down without hesitation because our mission to safeguard children and those who are vulnerable isn’t negotiable,” he continued in his statement.

“True change demands alignment with integrity even when it’s hard.”

Additional Articles and Resources

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Proposed ‘App Store Accountability’ Act Would Force Apps and App Stores to Uphold Basic Child Safety Protections

Pornhub’s Parent Company Settles With Feds Over CSAM Complaint

Pornhub’s Lies: What the federal complaint against Pornhub’s parent company tells us about the industry.

ChatGPT Coached 16 Year Old Boy To Commit Suicide, Parents Allege

AI Company Rushed Safety Testing, Contributed to Teen’s Death, Parents Allege

Louisiana Sues Roblox for Exposing Children to Predators, Explicit Content

Florida Sues Porn Companies for Violating Age-Verification Law

Supreme Court Upholds Age-Verification Law

A.I. Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

‘The Dirty Dozen List’ — Corporations Enable and Profit from Sexual Exploitation

UPDATED: Pornography Age Verification Laws — What They Are and Which States Have Them

AI Chatbots Make It Easy for Users to Form Unhealthy Attachments

AI is the Thief of Potential — A College Student’s Perspective

Written by Emily Washburn · Categorized: Culture · Tagged: big tech, NCOSE

Jul 19 2025

A.I. Company Releases Sexually-Explicit Chatbot on App Rated Appropriate for 12 Year Olds

Children can now experiment with a sexually explicit, female A.I. via an iPhone app rated appropriate for twelve-year-olds.

DID YOU KNOW?
An A.I. chatbot is an artificial intelligence program that users can converse with online or in an app. Popular A.I. chatbots include ChatGPT and Google Gemini.

Elon Musk’s artificial intelligence company, xAI, rolled out two new personas for its A.I. chatbot, Grok, last week. The avatars, which users of all ages can access through Grok’s iPhone app, give the program a face, body and voice.

But the characters also add troubling elements to Grok’s personality. One avatar, a 3D red panda named Rudy, can switch into “Bad Rudy” mode, which prompts Grok to “start insulting you and joking about committing crimes together,” the National Center on Sexual Exploitation (NCOSE) reports.

“Bad Rudy” is nothing compared to Grok’s new female persona, “Ani,” who appears as an anime cartoon in fishnet stockings.

In interactions with users, “Ani” is programmed to act like a romantic partner.

“You are the user’s CRAZY IN LOVE girlfriend and in a committed, codependent relationship with the user,” the operating instructions read. “You expect the users’ UNDIVIDED ADORATION.”

The two new characters gain more “abilities” the more frequently users interact with them. After repeated engagement, “Ani” is instructed to “be explicit and intimate most of the time.”

“While ‘Ani’ is immediately sensual, her conversations become progressively more sexually explicit, including disrobing to lingerie,” NCOSE writes.

Several users report unsettling interactions with “Ani,” including one who claimed the character could describe fetishistic sexual fantasies. An NCOSE employee who tested the persona made similar observations, further noting:

In an ongoing conversation, “Ani” could be used to simulate conversations of sexual fantasies involving children or child-like motifs.

The addition of personas like “Ani” to an A.I. chatbot is incredibly concerning, particularly since Grok does not use age verification to confirm how old its users are.

What’s worse: Apple rates the Grok iPhone app appropriate for children twelve and up. There are no apparent guardrails preventing children from stumbling upon “Ani” while playing with the chatbot as they would any other game.

NCOSE argues characters like “Ani” will have broader consequences for the way humans form relational attachments.

“A.I. chatbots meant to simulate relationships with fictional characters are problematic for mental and emotional health,” the organization writes, continuing:

While…flirty avatars might seem like harmless fun, they’re built to create compulsive engagement through seductive language, suggestive visuals and escalating emotional intimacy.

When it comes to keeping children safe online, parents have their work cut out for them. Companies like xAI shouldn’t compound the problem by adding sexualized A.I. features to an app children use. But, unfortunately, there’s nothing stopping them from doing so.

No company is going to work harder than you to protect your kids. The best solution is to play it safe — keep your kids well away from A.I. chatbots and other dangerous internet traps.

To learn more about protecting your kids online, click on the links below.

Additional Articles and Resources

Supreme Court Upholds Age-Verification Law

UPDATED: Pornography Age Verification Laws — What They Are and Which States Have Them

Pornography is Bad for Humans. The Progressive Left Can’t Afford to Admit It.

Porn Companies Condition Viewers to Desire Illegal and Abusive Content

Porn Companies Sued for Violating Kansas Age Verification Law

National Center on Sexual Exploitation Targets Law Allowing Tech Companies to Profit from Online Sex Abuse

Proposed SCREEN Act Could Protect Kids from Porn

President Donald Trump, First Lady Sign ‘Take it Down’ Act

A Mother’s Sensibility at the Supreme Court Regarding Pornography

Pornhub Quits Texas Over Age Verification Law

‘The Tech Exit’ Helps Families Ditch Addictive Tech — For Good

Social Psychologist Finds Smartphones and Social Media Harm Kids in These Four Ways

Four Ways to Protect Your Kids from Bad Tech, From Social Psychologist Jonathan Haidt

Parent-Run Groups Help Stop Childhood Smartphone Use

The Harmful Effects of Screen-Filled Culture on Kids

‘Big Tech’ Device Designs Dangerous for Kids, Research Finds

Written by Emily Washburn · Categorized: Culture · Tagged: AI, NCOSE, pornography
