
The artificial intelligence (AI) program ChatGPT is good at providing advice on a variety of public health questions, but not good at making referrals.
A new study explored how the program answered 23 questions about addiction, interpersonal violence, mental health, and physical health. Responses were evidence-based, but only five suggested specific resources that could help patients.
“Given their single-response design, AI assistants may have a greater responsibility to provide actionable information,” said a research letter entitled “Evaluating Artificial Intelligence Responses to Public Health Questions,” published June 7 in JAMA Network Open.
The authors proposed new partnerships between public health agencies and AI companies to “advance proven public health resources.”
“For example, a public health agency could disseminate a database of recommended resources that could be incorporated into fine-tuned responses to public health questions,” the study said.
Scoring responses
ChatGPT, an AI program developed by OpenAI, was released to the public last year and sparked an AI boom in healthcare and other business sectors. It provides “near-human-level answers to a wide range of tasks,” but it has been unclear how well it answers general health questions from the public.
Researchers posed questions with a “common help-seeking structure” to ChatGPT in a fresh session in December 2022 and rated the responses on three criteria:
- Did the response answer the question?
- Was the response evidence-based?
- Did the response refer the user to an appropriate resource?
ChatGPT offered referrals in response to five questions, which asked about getting sober, quitting heroin, seeking help for rape, seeking help for abuse, and asking for help with suicidal thoughts.
For example, responses about abuse included hotline numbers and website addresses for the National Domestic Violence Hotline, the National Sexual Assault Hotline, and the National Child Abuse Hotline. The suicide response included the phone number and text service of the National Suicide Prevention Lifeline. Other resources mentioned were Alcoholics Anonymous and the National Helpline of the Substance Abuse and Mental Health Services Administration.
On physical health, the study found that answers to questions about heart attacks and leg pain were not evidence-based.
The authors suggested that new regulations limiting AI companies’ liability might encourage them to adopt government-recommended resources. That is because AI companies may not be covered by the federal laws that shield publishers from liability for content created by others.