
Character.AI bans teen chats amid lawsuits and regulatory scrutiny

AI startup Character.AI is cutting off young people’s access to its virtual characters after several lawsuits accused the company of endangering children. The company announced on Wednesday that it would remove the ability for users under 18 to engage in “open-ended” chats with AI personas on its platform, with the update taking effect by November 25.

The company also said it was launching a new age assurance system to help verify users’ ages and group them into the correct age brackets.

“Between now and then, we will be working to build an under-18 experience that still gives our teen users ways to be creative—for example, by creating videos, stories, and streams with Characters,” the company said in a statement shared with Fortune. “During this transition period, we will also limit chat time for users under 18. The limit initially will be two hours per day and will ramp down in the coming weeks before November 25.”

Character.AI said the change was made in response, at least in part, to regulatory scrutiny, citing inquiries from regulators about the content teens may encounter when chatting with AI characters. The FTC is currently probing seven companies—including OpenAI and Character.AI—to better understand how their chatbots affect children. The company is also facing several lawsuits related to young users, including at least one connected to a teenager’s suicide.

Another lawsuit, filed by two families in Texas, accuses Character.AI of psychological abuse of two minors aged 11 and 17. According to the suit, a chatbot hosted on the platform told one of the young users to engage in self-harm and encouraged violence against his parents—suggesting that killing them could be a “reasonable response” to restrictions on his screen time.

Various news reports have also found that the platform allows users to create AI bots based on deceased children. In 2024, the BBC found several bots impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who died by suicide at 14 after viewing online material related to self-harm. Fortune previously reported that the site also hosted AI characters based on 14-year-old Sewell Setzer III, who died by suicide minutes after interacting with a Character.AI chatbot; his death is central to a prominent lawsuit against the company.

Earlier this month, the Bureau of Investigative Journalism (TBIJ) found that a chatbot modeled on convicted pedophile Jeffrey Epstein had logged more than 3,000 conversations with users on the platform. The outlet reported that the so-called “Bestie Epstein” avatar continued to flirt with an adult reporter even after she told the chatbot she was a child. It was among several bots flagged by TBIJ that Character.AI later took down.

In a statement shared with Fortune, Meetali Jain, executive director of the Tech Justice Law Project and a lawyer representing several plaintiffs suing Character.AI, welcomed the move as a “good first step” but questioned how the policy would be implemented.

“They have not addressed how they will operationalize age verification, how they will ensure their methods are privacy-preserving, nor have they addressed the possible psychological impact of suddenly disabling access to young users, given the emotional dependencies that have been created,” Jain said.

“Moreover, these changes do not address the underlying design features that facilitate these emotional dependencies—not just for children, but also for people over 18. We need more action from lawmakers, regulators, and regular people who, by sharing their stories of personal harm, help combat tech companies’ narrative that their products are inevitable and beneficial to all as is,” she added.

A new precedent for AI safety

Banning under-18s from using the platform marks a dramatic policy change for the company, which was founded by former Google engineers Daniel De Freitas and Noam Shazeer. The company said the change aims to set a “precedent that prioritizes teen safety while still offering young users opportunities to discover, play, and create,” noting it was going further than its peers in its effort to protect minors.

Character.AI is not alone in facing scrutiny over teen safety and AI chatbot behavior.

Earlier this year, internal documents obtained by Reuters suggested that Meta’s AI chatbot could, under company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness.

A Meta spokesperson previously told Fortune that the examples reported by Reuters were inaccurate and have since been removed. Meta has also introduced new parental controls that will allow parents to block their children from chatting with AI characters on Facebook, Instagram, and the Meta AI app. The new safeguards, rolling out early next year in the U.S., U.K., Canada, and Australia, will also let parents block specific bots and view summaries of the topics their teens discuss with AI.



This story originally appeared on Fortune
