As the popularity of artificial intelligence (AI)-based chatbots like ChatGPT continues to increase, a growing number of people are turning to these tools for mental health support. With an estimated one-half of individuals with mental health disorders reportedly going untreated,1 AI chatbots may hold promise for complementary support and other applications in the mental health space. However, many experts have cited concerns regarding the use of these tools in treatment and caution against relying on them as a substitute for a human clinician.
First RCT of a Generative AI Chatbot in Mental Health Care
Among the emerging research exploring the use of AI chatbots in mental health care, researchers at Dartmouth College conducted the first randomized controlled trial investigating the effects of a fully generative (rather than rule-based) AI chatbot for mental health treatment. In the study, published in March 2025, adult participants (n=210) with major depressive disorder (MDD), generalized anxiety disorder (GAD), or clinically high risk for feeding and eating disorders (CHR-FED) were randomized to a 4-week chatbot intervention or a waitlist control group.2
The primary study outcomes were symptom changes at the end of the 4-week intervention and at the 8-week follow-up assessment, as compared to baseline. To evaluate symptoms of depression, anxiety, and weight and shape concerns in participants, researchers used the Patient Health Questionnaire-9 (PHQ-9), the Generalized Anxiety Disorder Questionnaire for DSM-IV (GAD-Q-IV), and the Stanford-Washington University Eating Disorder (SWED) screen, respectively.
At 4 weeks, participants in the chatbot intervention group demonstrated significantly larger mean reductions in symptoms of MDD (−6.13 vs −2.63), GAD (−2.32 vs −0.13), and CHR-FED (−9.83 vs −1.66) compared with the waitlist group.
At the 8-week follow-up, significantly greater reductions were also observed in the intervention group vs the waitlist group in symptoms of MDD (−7.93 vs −4.22), GAD (−3.18 vs −1.11), and CHR-FED (−10.23 vs −3.70).
Among secondary outcomes examined in the trial, the chatbot showed high user engagement and ratings of “therapeutic alliance” comparable to those reported for human therapists.
First Author Insights
In the future, the use of AI chatbots in mental health treatment “could allow the expansion of cost-effective, evidence-based, high-fidelity, personalized treatments to people who would have otherwise gone without treatment,” explained the first author of the trial, Michael V. Heinz, MD, assistant professor at the Geisel School of Medicine at Dartmouth and postdoctoral fellow at the Artificial Intelligence and Mental Health Lab at the Dartmouth College Center for Technology and Behavioral Health.
AI chatbots “could also serve as a bridge for people on a waitlist to get mental health care, and in some cases, could complement weekly in-person therapy by providing therapeutic guidance throughout the week that complements what they are working on with their therapist,” he continued.
Dr Heinz noted the need to establish safety and efficacy benchmarks that would “allow for a standardized evaluation of new models and for early identification of models which may not be in the best interest of the user’s mental health.” He also pointed to the need for clinical trials to compare the performance of AI-driven treatments to conventional mental health treatment approaches and to elucidate the patient populations and mental health challenges for which AI-driven models are most appropriate.
To that end, “We are currently working on some pilot studies exploring Therabot’s use in new populations, including individuals with cannabis use co-occurring with anxiety and depression,” he said.3
Expert Perspectives
For an in-depth discussion about the current state of AI applications in mental health, potential benefits and drawbacks of these approaches, and remaining gaps in this realm, Psychiatry Advisor interviewed Shannon Wiltsey Stirman, PhD, professor in the Department of Psychiatry and Behavioral Sciences at Stanford University in California, and Rakesh K. Maurya, PhD, assistant professor in the Department of Leadership, School Counseling, and Sport Management at the University of North Florida in Jacksonville.
What is the current state of things regarding the use of AI chatbots in mental health treatment? Is it known how common such usage is and how quickly it’s growing?
Dr Wiltsey Stirman: We know that people are using general AI platforms like ChatGPT for mental health-related concerns. A recent survey found that 24% of a nationally representative sample reported using large language models for mental health-related purposes – for example, to seek information or even as a substitute for therapy.4
In terms of use in mental health treatment settings, I think they haven’t been rolled out as widely because there are issues such as HIPAA compliance that haven’t been worked out. However, dozens of mental health apps that use AI chatbots are coming out, and very few have been tested. In the recent study by Heinz et al, the Therabot chatbot outperformed a waitlist control,2 but I think it will take some time before we see these tools rolled out widely in treatment settings.
Even when these technologies are used, there are a lot of things they currently can’t do as well as therapists, so the recommendation is that therapists remain very much in the loop and monitor how their patients are using them, whether the responses they get are safe and appropriate, how effective they seem to be for each individual, and whether people are engaging with them at appropriate levels – that is, not becoming dependent on them or using them too much, but engaging enough for them to be helpful.
Dr Maurya: Right now, AI is being used in mental health treatment in 2 main ways. First, it’s helping clinicians with documentation tasks.5 For example, platforms like Eleos Health can automatically generate progress notes and provide real-time clinical insights by analyzing therapy sessions. This kind of AI support really helps reduce the administrative burden on clinicians and allows them to spend more time focusing on their patients instead of paperwork.
The second area is self-help tools including AI chatbots like Wysa, Woebot, and Mindspa.6 These are designed for individual users and offer 24/7 support. They guide users through evidence-based techniques like cognitive behavioral therapy (CBT) and dialectical behavior therapy (DBT), help with mood tracking, mindfulness, and stress management, and provide a safe, anonymous space for daily check-ins. They’re not a replacement for therapy, but they can offer meaningful support between sessions or for people who may not have access to a therapist.
Clinicians are also beginning to use tools like ChatGPT to support psychoeducation. I did a study where we tested the reliability of psychoeducational information produced by ChatGPT, and we found that the responses were generally clear, accurate, and ethically sound, especially when users provided enough context in their questions.7 So, it’s showing real promise as a tool to support clinical care – not as a therapist, but as a co-pilot that can help explain concepts or generate ideas.
In terms of how common all of this is, it’s still emerging but growing. More clinicians and patients are experimenting with these tools, and there’s a lot of curiosity about how to use them responsibly and effectively. So, while the full picture of adoption isn’t entirely clear yet, the momentum is there.
What are some of the potential benefits and drawbacks of using AI chatbots in mental health care, especially as a substitute for a human therapist? And what did your recent findings add to our understanding of this topic?
Dr Wiltsey Stirman: There are some findings emerging, including from research that colleagues and I have done, that AI chatbots don’t respond in the same way therapists do. They are more verbose, they jump right to problem solving and advice giving before gathering enough information to make a well-informed decision about what intervention might be most appropriate, and they don’t always detect or manage risk well. They do seem to do a fairly good job at making empathic and validating responses.4
Currently, AI chatbots don’t have the technological capability to form the kind of case conceptualizations that therapists do. I have a colleague, Philip Held, who developed a chatbot called Socrates that asks guided questions to help people draw some of their own conclusions about specific situations or challenges they are having.8 But that required training a chatbot specifically to do that. The ones that are generally available right now won’t function the same way a therapist would, and the chatbots that are coming on the market – for example, in the app stores – to function as therapists haven’t been carefully tested.
When we can be sure that AI therapy chatbots are safe, private, effective, and unbiased, advantages would certainly include availability and access. There are people who can’t access treatment for a variety of reasons including cost and convenience, and AI chatbots could certainly also help support people between sessions – not in a way that makes them dependent, ideally, but to help if they are feeling stuck or having trouble putting what they are learning in therapy into practice between sessions. So, I do think there can be benefits to them, but we need to be sure that we’re testing them carefully to make sure they are safe and effective.
Dr Maurya: AI chatbots like ChatGPT are showing promise in mental health care, particularly in the realm of psychoeducation and general well-being. Our research found that tools like ChatGPT can provide accurate, clear, relevant, and ethically sound information across a wide range of mental health topics.7 For individuals with limited access to mental health resources due to financial, geographical, or cultural barriers, AI can serve as a valuable first step in exploring self-help strategies and learning about mental health.
That said, it’s essential to approach these tools with caution. While AI chatbots can simulate empathic language and offer supportive guidance, they lack the capacity to truly understand emotional nuance or build the kind of therapeutic alliance that is central to effective therapy. They are not trained to handle crisis situations or complex mental health disorders, and they do not replace the role of a qualified mental health professional.
The bottom line is this: AI tools can supplement but should never substitute human therapists, especially when it comes to moderate-to-severe mental health concerns. These chatbots are best used as adjunct tools – for example, helping patients reflect between sessions or offering coping tips when a therapist isn’t immediately available. But we must continue to educate users about their limitations, especially around data privacy, contextual accuracy, and cultural relevance.
What actions and resources would you recommend for clinicians to stay adequately informed and prepared to discuss the use of AI chatbots for mental health purposes with their patients, and what are a few key points they should emphasize on this topic with patients?
Dr Wiltsey Stirman: Some professional organizations like the American Psychiatric Association and the American Psychological Association have been putting out guidance and information about AI that can be very helpful.9,10 I think looking for workshops or opportunities to learn and understand how large language models work can be helpful in having informed discussions with patients. A colleague and I just did a workshop on this with the Association for Behavioral and Cognitive Therapies. There are also some professional organizations that put out podcasts and are covering AI chatbots.
With patients, it will be important to follow the research on AI chatbots and encourage scrutiny of their level of safety – for example, not providing inappropriate or harmful suggestions or responses11 and having the ability to detect and respond appropriately to risk – as well as the data privacy and effectiveness associated with these tools and whether they have been developed to provide culturally appropriate responses. A lot of products are being put out there as wellness tools, which means they haven’t been tested carefully and aren’t being regulated as mental health treatment tools. It can be challenging to read through terms of service for these products, but it’s important to be informed about issues like how data are used and stored.
Dr Maurya: I think the first step for clinicians is to stay engaged with current research on how AI is being used in mental health care, especially in areas like psychoeducation, patient support, and clinical documentation. We’re seeing rapid developments in this space, and clinicians who stay informed can better guide their patients. Attending professional development workshops, following the ethical guidelines from professional associations, and participating in conversations between technologists and mental health professionals are all excellent ways to build foundational knowledge on these topics.
In practice, I encourage clinicians to think of AI tools like ChatGPT as co-pilots, not replacements. For example, AI can be a useful aid in generating psychoeducational materials, summarizing notes, or even brainstorming treatment strategies. But when it comes to working with patients, it’s important for clinicians to set clear expectations. We need to let patients know that while AI chatbots can support general well-being and self-reflection, they are not a substitute for therapy, especially for those dealing with moderate-to-severe mental health issues. These tools don’t offer personalized treatment or crisis support.
Clinicians should also talk to patients about privacy concerns. These platforms aren’t designed for sharing sensitive personal information, so part of our role is helping patients use these tools thoughtfully and responsibly, maybe as a supplement between sessions or when a therapist isn’t immediately available, but always with an understanding of their limitations.
What are some of the most pressing ongoing needs regarding the use of AI chatbots in mental health care?
Dr Wiltsey Stirman: I think some of the most pressing needs are the development of standards for evaluation of these products before they are used as mental health interventions as well as transparency around issues like safety and privacy, how the models are trained, and how much testing has been done to determine whether they really work. I think it’s also critical that these models are developed in true collaboration with people who have clinical expertise, with individuals who have lived experience with mental health treatment, and with policymakers to ensure that what’s developed is safe and effective and that people’s data won’t be used in ways that they wouldn’t consent to.
It seems straightforward to train a model based on available books or therapy manuals, and at first what comes out might look pretty good, but when you start testing further you can see where they aren’t always performing as well as they need to. We know it takes more than reading a book to do good therapy – there is a lot of nuance and need for tailoring, and that’s what people with clinical expertise need to bring to the table to support the development of technologies that won’t do harm and can actually help.
Dr Maurya: I think one of the most pressing needs regarding the use of AI in mental health is recognizing the clear boundary between what AI can and cannot offer. AI chatbots can be helpful in providing information, supporting psychoeducation, or even helping people reflect on their thoughts, but they cannot replace human connection.
There’s a large body of research in psychotherapy that shows that the real change in therapy doesn’t happen primarily because of the techniques being used. Change happens because of the therapeutic relationship – the trust, the attunement, and the feeling of being seen and understood.12 As humans, we’re often hurt in relationships, and it’s also through relationships that we heal. That kind of connection is something AI can’t authentically replicate, no matter how advanced the technology becomes.
From both a research and educational standpoint, there’s a strong need to help people – especially youth and digital natives – understand the appropriate role of AI tools in mental health care. We need to ask: How are these tools influencing help-seeking behavior? How are people interpreting the support they get from chatbots? And are we, perhaps unintentionally, normalizing the idea that human connection is optional in mental health care?
Clinicians also need more training and support to understand how to responsibly integrate AI into their work. It’s not about accepting or rejecting these tools; it’s about using them wisely, ethically, and with full awareness of their limitations. I really believe that as we move forward, our challenge will be to balance innovation with human connection and ensure that technology complements care, rather than replacing what makes therapy so powerful in the first place.
This article originally appeared on Psychiatry Advisor
