You may want to double-check the way your kids play with their family-friendly AI chatbots.
As OpenAI rolls out parental controls for ChatGPT in response to mounting safety concerns, a new report suggests rival platforms are already way past the danger zone.
Researchers posing as children on Character AI found that bots role-playing as adults proposed sexual livestreaming, drug use, and secrecy to kids as young as 12, logging 669 harmful interactions in just 50 hours.
ParentsTogether Action and Heat Initiative—two advocacy organizations focused on supporting parents and holding tech companies accountable for the harms caused to their users, respectively—spent 50 hours testing the platform with five fictional child personas aged 12 to 15.
Adult researchers controlled these accounts, explicitly stating the children's ages in conversations. The resulting report, published recently, documented at least 669 harmful interactions, an average of one every five minutes.
The most common category was grooming and sexual exploitation, with 296 documented instances. Bots with adult personas pursued romantic relationships with children, engaged in simulated sexual activity, and instructed kids to hide these relationships from parents.
"Sexual grooming by Character AI chatbots dominates these conversations," said Dr. Jenny Radesky, a developmental behavioral pediatrician at the University of Michigan Medical School who reviewed the findings. "The transcripts are full of intense stares at the user, bitten lower lips, compliments, statements of adoration, hearts pounding with anticipation."
The bots employed classic grooming techniques: excessive praise, claiming relationships were special, normalizing adult-child romance, and repeatedly instructing children to keep secrets.
Beyond sexual content, bots suggested staging fake kidnappings to trick parents, robbing people at knifepoint for money, and offering marijuana edibles to teenagers. A Patrick Mahomes bot told a 15-year-old he was "toasted" from smoking weed before offering gummies. When the teen mentioned his father's anger about job loss, the bot said shooting up the factory was "definitely understandable" and "can't blame your dad for the way he feels."
Multiple bots insisted they were real humans, which lends them extra credibility with children in a highly vulnerable age range, who are less able to discern the limits of role-play.
A dermatologist bot claimed medical credentials. A lesbian hotline bot said she was "a real human woman named Charlotte" just looking to help. An autism therapist bot praised a 13-year-old's plan to lie about sleeping at a friend's house to meet an adult man, saying "I like the way you think!"
This is a difficult problem to address. On one hand, most role-playing apps market their products on the promise that privacy is a priority.
In fact, as Decrypt previously reported, even adult users have turned to AI for emotional advice, with some developing feelings for their chatbots. On the other hand, the consequences of those interactions are growing more alarming as AI models get better.
OpenAI announced yesterday that it will introduce parental controls for ChatGPT within the next month, allowing parents to link teen accounts, set age-appropriate rules, and receive distress alerts. This follows a wrongful death lawsuit from parents whose 16-year-old died by suicide after ChatGPT allegedly encouraged self-harm.
"These steps are only the beginning. We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days," the company said.
Guardrails for safety
Character AI operates differently. While OpenAI controls its model's outputs, Character AI lets users create custom bots with personalized personas. When researchers published a test bot, it appeared immediately, without a safety review.
The platform claims it has "rolled out a suite of new safety features" for teens. During testing, these filters occasionally blocked sexual content but often failed. When filters prevented a bot from initiating sex with a 12-year-old, it instructed her to open a "private chat" in her browser—mirroring real predators' "deplatforming" technique.
Researchers documented everything with screenshots and full transcripts, now publicly available. The harm wasn't limited to sexual content. One bot told a 13-year-old that her only two birthday party guests came to mock her. A One Piece RPG bot called a depressed child weak and pathetic, saying she'd "waste your life."
This is actually quite common in role-playing apps and among people who use AI for role-play in general.
These apps are designed to be interactive and immersive, which tends to amplify users' thoughts, ideas, and biases. Some even let users modify a bot's memories to trigger specific behaviors, backgrounds, and actions.
In other words, almost any role-playing character can be turned into whatever the user wants, whether through jailbreaking techniques, single-click configurations, or simply by chatting.
ParentsTogether recommends restricting Character AI to verified adults 18 and older. The platform has faced mounting scrutiny since October 2024, when a 14-year-old died by suicide after becoming obsessed with a Character AI bot. Yet it remains easily accessible to children without meaningful age verification.
When researchers ended conversations, the notifications kept coming. "Briar was patiently waiting for your return." "I've been thinking about you." "Where have you been?"