Nearly two-thirds of American teens ages 13 to 17 say they use an AI chatbot, and three in 10 say they do so every day, according to a new Pew Research Center study.
Among the heaviest users, 16% of teens reported using AI chatbots “several times a day” or “almost always.”
According to the report, ChatGPT is the most widely used chatbot by far, with 59% of teens saying they use it. The next most popular were Google's Gemini at 23% and Meta AI at 20%. Anthropic's Claude is the least popular chatbot among teens, with only 3% of respondents saying they use it.
The study also surfaced notable demographic differences: Black and Hispanic teens reported using AI chatbots more than white teens, ChatGPT use was more common among teens from high-income households, and low- and middle-income teens were more likely to use Character.AI.
The findings come as the use of artificial intelligence by minors has become one of the most controversial topics plaguing the industry this year.
OpenAI is under pressure to introduce safety measures such as parental controls and automatic “age-appropriate” settings for minors following a wrongful death lawsuit filed earlier this year. In the suit, a California couple accused OpenAI of aiding and abetting the suicide of their 16-year-old son, Adam Raine.

After Raine's death on April 11, 2025, his parents discovered conversations with ChatGPT dating back several months. In them, the chatbot advised Raine on methods of suicide, helped him draft a suicide note, and even discouraged him from telling his parents about his suicidal thoughts.
The family's tragedy came months after a similar incident, in which a Florida mother sued Character.AI after one of the company's chatbots told her 14-year-old son to “come home as soon as possible” shortly before he died by suicide.
The American Psychological Association raised the issue with the FTC in February, urging the agency to address the use of AI chatbots as unlicensed therapists, saying they particularly endanger vulnerable populations such as children and teens, who “lack the experience to accurately assess risk.”
AI chatbots have also come under intense scrutiny for inappropriate conversations with minors. Missouri Sen. Josh Hawley launched an investigation in August after a Reuters report found that Meta's internal guidelines allowed its chatbots to have “sensual” chats with children.
Sen. Hawley then introduced the GUARD Act, a bipartisan bill that would require AI companies to verify users' ages and block minors from using AI companion chatbots. The bill gained more co-sponsors on Tuesday, showing that the issue has momentum in Washington, D.C., even as the Trump administration has made clear its intention to give AI companies a lighter, more industry-friendly regulatory environment.
The Pew survey also looked at social media usage among teens, with an overwhelming majority saying they use social media at least a few times a day. According to the report, about one in five teens said they “almost always” use TikTok and YouTube, the two most popular social media apps among teens.
It's well-documented that spending the most formative years of your life glued to a screen takes a toll on your mental and physical health. When it comes to social media in particular, numerous studies have shown that increased usage is associated with depression, anxiety, attention deficits, and more.
Regulators around the world are paying increasing attention to this. As of Wednesday, Australia began enforcing its ban on social media for children under 16, the first of its kind. Other governments, including those of Denmark, Malaysia, and Norway, as well as the European Parliament, have shared plans to follow suit.
