In 2022, the founders of chatbot startup Character.AI launched a platform that allows anyone to create interactive characters using artificial intelligence (AI).
The app's popularity exploded: it quickly grew to more than 20 million users, who have created more than 10 million chatbot characters.
Many of the users who created these characters were teenagers. In November 2025, Character.AI banned users under the age of 18 amid mounting social and legal pressure over youth suicides linked to its use. The decision came after numerous attempts to improve safety for young people, including stricter parental controls and content filters.
The ban is an attempt to protect teenagers from potential harm. But it has also silenced their more creative, playful, and emotionally expressive experiments with AI.
Our new research, published in the Proceedings of the Association for Computing Machinery CHI Conference 2026, captures and preserves the new ways young people are experimenting with AI so we can build on it for the better.
What are teenagers actually using AI chatbots for?
As of 2026, around three in ten US teenagers use AI every day. The idea of using AI for friendship has taken media headlines and app stores by storm, with hundreds of companion apps available.
Media coverage of AI companions taps into two main fears. One is that young people will replace human friendships with AI. The other is that teens end up losing social skills when they interact with flattering chatbots instead of real people.
These concerns are important. However, peer relationships account for a surprisingly small share of the reasons young people actually use AI. According to a recent Pew Research Center survey, the most common reasons teens use AI chatbots were to seek information (57%), do homework (54%), and have fun (47%). Only 12% used AI for emotional support or advice. Romance and alleviating loneliness consistently ranked lowest among teens' motivations for using AI, at 4-6% and 8-11% respectively.
If AI chatbots are introduced almost exclusively as companions, there is a risk that a large part of how teenagers spend their time with AI will be overlooked.
Our team set out to understand what young people do when they are free to use AI outside the confines of school: having fun, playing with it, and creating characters of their own design.
AI as entertainment
Before the ban, Character.AI was a popular "AI entertainment" destination for young people. Viral TikTok channels still feature its characters, drawn from popular youth media ranging from Peppa Pig to Call of Duty.
Our team spent over eight months, from July 2024 to March 2025, immersed in Character.AI's official community on Discord, an online chat platform, which has more than 500,000 members. We systematically analyzed 2,236 posts by young people aged 13-17. Of these users, the majority (68.2%) identified as female or non-binary, and 59% had created their own AI characters.
Through analysis of youth discussions on the platform, we identified three core intentions behind their engagement with Character.AI: restore, explore, and transform.
Restore
"My favorite period pain relief bot is Percy Jackson"
Young people used characters for emotional comfort, venting, escapism, and mood management. Rather than reflecting formal clinical practice, we observed young people discussing "comfort bots": soft, gentle role-plays with familiar characters.
A character from a beloved book comforted someone who was on their period, or a character from a popular comic book gave someone some encouragement for their next math test.
Explore
"Character.AI helped me find the creative spark within me"
Young people explored boundaries, engaged in creative world-building, and extended their fandoms. One teenager wrote a three-book story based on interactions between characters. Another, inspired by a love of theater, created a troupe of traveling theatrical characters. They reported that these skills transferred to the real world, boosting their creativity and improving their writing.
Transform
"I have a character who suffers from mental health issues and tends to project his own persona while RPing [roleplaying]"
Young people used AI to try out different identities, navigate real-world relationships, and re-enact difficult real-world scenarios. Some created "clones" of themselves with superpowers, or self-affirming versions of themselves.
Drawing inspiration from reality, they discussed creating characters that reflected difficult relationships in the real world, such as a “toxic friend,” an “annoying sister,” and a “foster care agent.”
Characters created with a purpose
We also mapped seven different character archetypes that the young people were creating and discussing.
- Soother – provides emotional support
- Narrator – a cast of characters for role-playing
- Trickster – jokes, tests and transgressive chats
- Icon – a remixed celebrity or fandom figure
- Dark Souls – angry, emotionally complex characters
- Proxy – modeled on the lives of real people
- Mirror – a clone of the self
These archetypes are the central finding of our research. Far from passively consuming flattering or romanticized chatbot interactions, young people are intentionally creating characters that are angry, transgressive, playful, creative, and thoughtful.
This shows that we need to stop treating “companion AI” as if it were one homogeneous entity. Treating AI chatbots as a category is akin to treating all screen time as the same experience, whether your child is watching Bluey with their family or doomscrolling short-form content on their phone alone at night when they should be sleeping.
Aiming for a better chatbot
The American Academy of Pediatrics recently changed its guidelines for screen time from set time limits to a framework that considers the individual child, their usage, family relationships, and environment.
The same logic should apply to AI chatbots. That means going beyond asking adults about children's use of AI, testing AI products with fake accounts for specific use cases, or banning access outright, and instead listening to young people's experiences, experiments, and ideas for the future.
Bans are a reaction to bad design, but they won’t lead to better, safer AI products for teens.
The answer is not to keep young people away from AI forever. Rather, it’s about building AI that is worthy of their trust, that nurtures their creativity, and that keeps them grounded in the physical world with family, friends, and community.
