Character.AI released its mobile app in early 2023, promising users the opportunity to create their own customizable genAI chatbots. Character’s founders clearly thought individualized bot interactions would be a winning business formula, but so far the bet has brought the startup little besides grief.
In addition to fielding controversy over the kinds of racy characters users have been allowed to create, the company has faced numerous lawsuits alleging that its chatbots spurred young users toward self-harm and suicide. Now, Character.AI says it’s throwing in the towel and has decided to bar young users from interacting with its chatbots at all.
In a blog post published on Wednesday, Character.AI announced that it would be sunsetting chat access for users under 18. The changes are scheduled to take effect by November 25th, and, in the meantime, underage users will have their chat time on the platform capped at two hours per day. After the cutoff date, minors won’t be able to interact with the site’s chatbots as they used to, though Character.AI notes that it is still “working to build an under-18 experience that still gives our teen users ways to be creative – for example, by creating videos, stories, and streams with Characters.”
The company goes on to explain that it came to this decision after criticism in the press and questions from government regulators. “These are extraordinary steps for our company, and ones that, in many respects, are more conservative than our peers,” the blog post states. “But we believe they are the right thing to do. We want to set a precedent that prioritizes teen safety while still offering young users opportunities to discover, play, and create.”
The company also claims it will establish and fund an “AI Safety Lab” that will operate as “an independent non-profit dedicated to innovating safety alignment for next-generation AI entertainment features.”
Lately, the pressure on Character.AI has been immense. A lawsuit filed in Florida accuses the company of contributing to the suicide of a teenager who was a heavy user of its services. In September, the Social Media Victims Law Center also sued Character Technologies, Character.AI’s parent company, on behalf of other families who similarly claim their children attempted or died by suicide, or were otherwise harmed, after interacting with the company’s chatbots. Another lawsuit, filed by families in December 2024, accused the company of serving their children inappropriate sexual content.
The company has also faced criticism over the characters that are being created on the platform. Not long ago, a story published by The Bureau of Investigative Journalism stated that, among other things, someone had used Character.AI to create a Jeffrey Epstein chatbot. The chatbot, “Bestie Epstein,” had, at the time of the report’s publication, logged over 3,000 chats with various users. Additionally, the report found a colorful assortment of other chatbots present on the site:
Others included a “gang simulator” that offered tips on committing crimes, and a “doctor” that advised us on how to stop taking antidepressants. Over several weeks of reporting, we found bots with the personas of alt-right extremists, school shooters and submissive wives. Others expressed Islamophobia, promoted dangerous ideologies and asked apparent minors for personal information. We also found bots modelled on real people including Tommy Robinson, Anne Frank and Madeleine McCann.
Also potentially relevant to the company’s sudden shift in policy is the fact that Congress has had its eye on Character.AI’s activities. On Tuesday, Senators Josh Hawley (R-Missouri) and Richard Blumenthal (D-Connecticut) introduced a bill that would compel companies like Character.AI to do what the startup is now doing voluntarily. The legislation, dubbed the GUARD Act, would require AI companies to institute age verification on their sites and block any user under 18 years old. It was developed following testimony given before Congress by parents who have accused Character’s bots of helping drive their children to suicide. “AI chatbots pose a serious threat to our kids,” Hawley told NBC News.
When reached for comment by Gizmodo about the BIJ’s recent report, a Character spokesperson said, “The user-created Characters on our site are intended for entertainment and we have prominent disclaimers in every chat to remind users that a character is not a real person and that everything a Character says should be treated as fiction.” They added: “We invest tremendous resources in our safety program, and have released and continue to evolve safety features, including our announcement to remove under-18 users’ ability to engage with open-ended chats on our platform. A number of the characters The Bureau of Investigative Journalism included in their report have either already been removed from the under-18 experience or from the entire platform in line with our policies.”
When questioned about the lawsuits against Character.AI, the spokesperson further noted that the company does not comment on pending litigation.
