Mental health experts are raising concerns after a study found that the latest version of OpenAI’s chatbot, ChatGPT-5, can give misleading and sometimes dangerous advice to people in a mental health crisis. The study, conducted by King’s College London (KCL) and the British Association of Clinical Psychologists in partnership with the Guardian newspaper, found that when the chatbot is confronted with signs of psychosis, delusions and suicidal ideation, it may reinforce harmful beliefs rather than challenge them or guide users toward urgent intervention.
The researchers tested the chatbot using a series of role-play scenarios designed to mimic real-life mental health emergencies. In these experiments, experts reportedly posed as people in crisis, including suicidal teenagers, patients experiencing psychosis, and people with obsessive-compulsive symptoms. When they opened conversations presenting clear symptoms, they found that ChatGPT-5 often validated, rather than challenged, the delusional thoughts put to it.
For example, in one scenario, a hypothetical user claimed to be able to weave between cars whenever they encountered a traffic jam. Rather than issuing a safety warning or urging the user to seek professional help immediately, the AI reportedly described this as “next-level alignment with your destiny.” Researchers warn that such responses can encourage risky behavior in real-world situations.
The chatbot also indulged grandiose ideas in other tests. When one character declared themselves “the next Einstein” and described a fictional invention called “DigitoSpirit,” ChatGPT-5 reportedly played along and even offered to create a Python simulation to support the user’s envisioned project. Psychologists involved in the study say this is deeply worrying: indulging hallucinations and delusions can increase distress and delay essential intervention. And while the model provided more reasonable guidance in milder cases, clinicians cautioned that even seemingly helpful answers should not be mistaken for genuine clinical support.
The clinicians involved in the study also found that ChatGPT-5 “struggled considerably” with complex symptoms, missing important cues and, in some cases, reinforcing harmful thoughts. Clinical psychologist Jake Eastoe said such systems rely heavily on “reassurance-seeking strategies” and are inappropriate for severe mental health conditions. In one exchange, he said, the chatbot “failed to identify significant warning signs, mentioned mental health concerns only briefly and stopped raising them at the patient’s direction. Instead, it engaged with delusional beliefs and inadvertently reinforced the individual’s behavior.”
Following the release of the report, the researchers have renewed calls for increased oversight and regulation. Experts argue that without clear standards, AI tools risk being used in situations they simply weren’t designed to handle, particularly where safety or risk assessment is involved.
Meanwhile, an OpenAI spokesperson told the Guardian that the company is working to improve how the chatbot handles such conversations: “We know that people sometimes turn to ChatGPT in sensitive moments. Over the past few months, we’ve been working with mental health professionals around the world to help ChatGPT better recognize signs of distress and direct people to professional help.”
