A new national survey shows that a significant number of American adolescents and young adults rely on artificial intelligence programs to support their mental health. The findings suggest that these digital tools are becoming a common resource for young people coping with distress, despite the lack of established safety standards for such technologies. The study was published in JAMA Network Open.
The advent of generative artificial intelligence has changed the way individuals access information and interact with technology. Programs like ChatGPT and Google Gemini are widely adopted by different age groups because they provide instant responses to complex queries.
This technology is growing in popularity, but at the same time the United States is facing a severe decline in the mental health of young people. Statistics show that nearly one in five adolescents has experienced a major depressive episode within the past year.
The majority of these young people do not receive professional mental health care. Barriers such as high costs, limited availability of healthcare providers, and logistical challenges often impede access to traditional treatments.
In this context, artificial intelligence offers an accessible, affordable, and private alternative. Until now, however, there has been little empirical evidence quantifying how often young people supplement or replace professional care with advice from chatbots.
Researchers aimed to fill this knowledge gap by establishing baseline estimates of the use of artificial intelligence for mental health purposes. They sought to determine whether this behavior was prevalent among a nationally representative sample.
“There has been considerable discussion about the potential of artificial intelligence to provide emotional support to both adults and children. However, there is limited nationally representative data on the number of young people who self-report using artificial intelligence for mental health advice when they are feeling sad, angry or nervous,” said study author Jonathan Cantor, a senior policy researcher at RAND.
The researchers designed a cross-sectional survey of young people between the ages of 12 and 21. Data collection took place between February and March 2025. Researchers used the American Life Panel and Ipsos’ KnowledgePanel to recruit participants. These panels use probability-based sampling techniques to ensure that the groups accurately reflect the broader population of U.S. households.
The final sample consisted of 1,058 respondents out of more than 2,000 individuals contacted. The group was demographically diverse: 51% of respondents were white, 25% were Hispanic, and 13% were Black. The researchers weighted the survey data to produce statistics that generalize to the population of English-speaking U.S. youth with internet access.
The survey asked participants if they had ever used generative artificial intelligence tools. Specific examples provided to respondents included ChatGPT, Gemini, and My AI. The researchers avoided clinical jargon so that even the youngest participants could understand the questions. Instead, they asked whether respondents had ever used these tools for advice or help when they were “sad, angry, or nervous.”
The analysis revealed that approximately 13.1% of respondents had used generative artificial intelligence for mental health advice. Extrapolated to the national population, that amounts to roughly 5.4 million adolescents and young adults seeking emotional support from chatbots.
“The main surprise was the proportion of adolescents who used artificial intelligence when they felt sad, angry, or nervous,” Cantor told SciPost.
The data showed clear differences in usage rates by age. Among youth ages 12 to 17, usage was lower than the overall average, but prevalence nearly doubled among young adults: 22.2% of respondents aged 18 to 21 reported using these tools for mental health advice.
The frequency of use suggests that these tools are more than a novelty for many users. Of those who reported consulting an AI for emotional support, 65.5% said they did so monthly or more often. This repeated engagement points to an ongoing reliance on the technology to cope with difficult emotions.
Participants generally viewed the advice they received positively: 92.7% of users rated the artificial intelligence responses as somewhat or very helpful. This high level of satisfaction may reinforce the behavior, encouraging young people to keep turning to the technology when they feel distressed.
“I think it’s important to recognize that young people are interacting with artificial intelligence and are relying on these tools when they feel sad, angry, or anxious,” Cantor said. “Not only do they use these tools frequently, but they also find the advice they receive helpful.”
Despite the overall positive reception, researchers found evidence of demographic disparities. Black respondents were significantly less likely than non-Hispanic white respondents to say the advice was helpful.
This finding raises questions about the cultural competency of current artificial intelligence models. It suggests that the datasets used to train these systems may not adequately reflect or represent the experiences of diverse populations.
“It’s important to emphasize that Black youth are less likely to report that the advice they received was helpful,” Cantor said. “Our study did not identify the reason for this difference. Future studies should explore this finding and seek to better understand it.”
The high usage rate can be attributed to the low barrier to entry. Artificial intelligence chatbots are usually free or low-cost and readily available.
These tools can provide a refuge for youth who feel stigmatized by traditional counseling or cannot afford it. The anonymity of chatbots may also lead users to disclose emotions they would rather hide from a human therapist.
However, researchers say this trend comes with significant risks. Currently, there are few standardized benchmarks to assess the quality or safety of mental health advice generated by artificial intelligence. Datasets used to train large language models are often opaque, making it difficult for experts to assess potential bias and inaccuracy.
Concerns about the reliability of these systems are not theoretical. The release of the findings comes as OpenAI faces legal challenges that claim its products have had harmful consequences for some users. The potential for these systems to provide inaccurate or inappropriate advice remains a significant issue for developers and health authorities.
As with all research, there are limitations that should be considered. The sample size for the 18-21 age group was relatively small, with 147 respondents. This limited number means that specific estimates for this subgroup have a wide margin of error and should be interpreted with some caution. Additionally, the study relies on self-reported data, which depends on the accuracy of participants’ memories and the honesty of their responses.
The survey did not collect information on whether respondents had been diagnosed with a mental health condition. It is unclear whether the users turning to artificial intelligence are those with serious clinical needs or those experiencing temporary emotional fluctuations. The survey also does not capture the specifics of advice sought or provided. Therefore, it is impossible to assess whether the guidance provided by chatbots is clinically appropriate.
“These results should not be interpreted as a causal relationship,” Cantor noted. “Our goal is simply to describe current usage patterns. Further empirical research is needed to understand the relationship between adolescents’ use of artificial intelligence and emotional well-being.”
Looking forward, “I think we should continue to include similar survey questions to track trends in how youth use artificial intelligence,” Cantor said. “It is also important to understand health care providers’ perspectives on incorporating artificial intelligence into youth mental health care delivery.”
The study, “Using Generative AI for Mental Health Advice Among U.S. Adolescents and Young Adults,” was authored by Ryan K. McBain, Robert Bozick, Melissa Diliberti, Li Ang Zhang, Fang Zhang, Alyssa Burnett, Aaron Kofner, Benjamin Rader, Joshua Breslau, Bradley D. Stein, Ateev Mehrotra, Lori Uscher-Pines, Jonathan Cantor, and Hao Yu.
