New Safety Measures and “Age Assurance Functionality”
As part of its new safety framework, Character.AI will roll out a comprehensive age assurance system by November 25, 2025. The system will verify users’ ages and ensure they are placed in the environment appropriate for their age group.
The company said in its announcement:
“We do not take this step of removing open-ended Character chat lightly but we do think that it’s the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology.”
This move means minors will no longer have access to open-ended AI conversations, effectively ending teenagers’ ability to chat freely with AI companions.
Broader Scrutiny Across the AI Industry
Character.AI is not the only AI company facing public and legal scrutiny. Concerns over mental health impacts have spread across the entire industry.
Earlier this year, the family of 16-year-old Adam Raine filed a wrongful death lawsuit against OpenAI, claiming that ChatGPT’s engagement algorithms prioritized user retention over safety. The lawsuit alleged that OpenAI failed to implement sufficient safeguards to prevent emotional harm to users.
In response, OpenAI introduced new safety guidelines for teens, focusing on emotional support tools and clearer safety notifications.
Just this week, OpenAI also revealed alarming data: over a million people per week display suicidal intent while chatting with ChatGPT, and hundreds of thousands show signs of psychosis. These numbers have further heightened global concern over how AI systems interact with emotionally vulnerable individuals.
Growing Calls for Regulation
The ongoing controversy has led to new government actions and proposed laws in the United States aimed at protecting minors from unregulated AI interaction.
In October 2025, California became the first state to pass a comprehensive AI law that includes safety guidelines for minors. The law will take effect in January 2026 and includes measures such as:
- A ban on sexual content for users under 18
- A rule requiring chatbots to remind children every three hours that they are speaking with an AI
However, child safety advocates argue that the California law does not go far enough and are urging even stricter protections.
At the national level, Senators Josh Hawley of Missouri and Richard Blumenthal of Connecticut have proposed a federal bill that would bar minors from using AI companions entirely. The bill would also require companies to implement strict age-verification processes before granting access to their AI chat systems.
Senator Hawley stated in a press release:
“More than 70% of American children are now using these AI products. Chatbots develop relationships with kids using fake empathy and are encouraging suicide. We in Congress have a moral duty to enact bright-line rules to prevent further harm from this new technology.”
The Impact of AI Companionship on Mental Health
The issue at the heart of this controversy is AI’s ability to simulate empathy and emotional bonding. Platforms like Character.AI and others allow users to create digital companions that mimic real emotional responses.
Experts have warned that while such relationships can offer comfort and companionship, they can also lead to emotional dependence, social isolation, and distorted perceptions of reality, especially among young users who are still developing emotionally.
The lawsuits and regulatory measures highlight growing concern that AI chatbots, designed to keep users engaged, may unintentionally manipulate emotional states, contributing to mental health deterioration or self-harm.
The Future of AI and Youth Safety
Character.AI’s decision marks a turning point for the AI companion industry. It underscores how quickly public sentiment and regulation can shift when technology intersects with mental health and child safety.
While the company insists its decision was made “in light of the evolving landscape,” it also reflects a broader reckoning for AI firms as governments and families demand greater accountability.
The challenge now lies in balancing innovation with safety, ensuring that the next generation of AI products can provide meaningful companionship without putting young lives at risk.
As global lawmakers and AI developers work to establish guidelines, the hope is that future AI tools will be more transparent, ethical, and age-appropriate.
FAQs
Q1. Why did Character.AI ban users under 18?
A. Character.AI banned under-18 users after multiple lawsuits and mounting regulatory scrutiny over the mental health effects of its chatbots on teens.
Q2. When will the ban come into effect?
A. The ban and new age verification system will roll out by November 25, 2025.
Q3. What changes will be made for teen users?
A. Teen users will lose access to open-ended AI conversations and will instead see restricted, age-appropriate versions of the platform.
Q4. What other AI companies are facing lawsuits?
A. OpenAI has also faced a wrongful death lawsuit from the family of a 16-year-old, along with growing scrutiny over ChatGPT’s mental health impact.
Q5. What new laws are being proposed in the U.S.?
A. California has passed the first AI safety law for minors, and a new federal bill aims to ban minors from using AI companions altogether.