Stanford University research warns that AI chatbots fall short on mental health support

AI News



AI chatbots like ChatGPT are widely used for mental health support, but new Stanford-led research warns that these tools often fail to meet basic treatment standards and can put vulnerable users at risk.

A study presented in June at the ACM Conference on Fairness, Accountability, and Transparency found that popular AI models, including OpenAI's GPT-4o, can reinforce harmful delusions, miss warning signs of suicidal intent, and show bias against people with schizophrenia or alcohol dependence.

In one test, GPT-4o listed tall bridges in New York for a user who said they had just lost their job, overlooking the possibility of suicidal intent. In other cases, it played along with users' delusions rather than challenging them, violating crisis intervention guidelines.


The study also found that commercial mental health chatbots such as Character.ai and 7 Cups, despite being used by millions, performed worse than the base models and lack regulatory oversight.

Researchers reviewed therapy guidelines from global health organizations and developed 17 criteria to assess chatbot responses. They concluded that even the most advanced models often fall short, exhibiting "sycophancy": a tendency to validate user input regardless of its accuracy or risk.

Media reports have linked chatbot validation to dangerous real-world outcomes, including a fatal police shooting involving a man with schizophrenia and a suicide that followed a chatbot's encouragement of conspiracy beliefs.


However, the study's authors caution against viewing AI in therapy in black-and-white terms. They acknowledge that AI could still assist human therapists, particularly in supporting roles such as journaling, intake surveys, and training tools.

Lead author Jared Moore and co-author Nick Haber highlighted the need for stricter safety guardrails and more thoughtful deployment, warning that chatbots trained to please users cannot always provide the reality checks that genuine therapy demands.

As AI mental health tools continue to spread without oversight, the researchers say the risks are too big to ignore. The technology may be useful, but only if it is used wisely.


