In the absence of stronger federal regulation, some states have begun regulating apps that offer AI "therapy" as more people turn to artificial intelligence for mental health advice.
But the laws, all passed this year, don't fully address the fast-changing landscape of AI software development. And app developers, policymakers and mental health advocates say the resulting patchwork of state laws isn't enough to protect users or hold the creators of harmful technology accountable.
"The reality is millions of people are using these tools and they're not going back," said Karin Andrea Stephan, CEO and co-founder of the mental health chatbot app Earkick.
___
EDITOR'S NOTE: This story includes discussion of suicide. If you or someone you know needs help, the U.S. National Suicide and Crisis Lifeline is available by calling or texting 988. There is also an online chat at 988Lifeline.org.
___
State laws take different approaches. Illinois and Nevada have banned the use of AI to treat mental health. Utah has placed certain limits on therapy chatbots, including requiring them to protect users' health information and to clearly disclose that the chatbot is not human. Pennsylvania, New Jersey and California are also considering ways to regulate AI therapy.
The impact on users varies. Some apps have blocked access in states with bans. Others say they are making no changes as they wait for more legal clarity.
And many laws do not cover generic chatbots like ChatGPT, which is not explicitly marketed for therapy but is used by countless people for it. Those bots have attracted lawsuits in horrifying cases where users lost their grip on reality or took their own lives after interacting with them.
Vaile Wright, who oversees health care innovation at the American Psychological Association, agreed the apps could fill a need, noting the nationwide shortage of mental health providers, the high cost of care and uneven access for insured patients.
A mental health chatbot that is rooted in science, created with expert input and monitored by humans, Wright said, could change the landscape.
"This may be something that helps people before they get into a crisis," she said. "That's not something that's on the commercial market right now."
That's why federal regulations and oversight are needed, she said.
Earlier this month, the Federal Trade Commission announced it was opening inquiries into seven AI chatbot companies, including the parent companies of Instagram and Facebook, Google, ChatGPT, Grok (the chatbot on X), Character.AI and Snapchat. And the Food and Drug Administration is convening an advisory committee on Nov. 6 to review generative AI-enabled mental health devices.
Federal agencies could consider restrictions on how chatbots are marketed, limit addictive practices, require disclosures to users that they are not medical providers, require companies to track and report suicidal thoughts, and offer legal protections to people who report bad practices by companies, Wright said.
From "companion apps" to "AI therapists" to "mental wellness" apps, AI's use in mental health care is varied and hard to define, let alone write laws around.
That has led to a variety of regulatory approaches. Some states, for example, target companion apps that are designed only for friendship but don't wade into mental health care. The laws in Illinois and Nevada ban products that claim to provide mental health treatment outright, threatening fines of up to $10,000 in Illinois and $15,000 in Nevada.
However, even a single app can be difficult to categorize.
Earkick's Stephan said there is still a lot that is "very muddy" about Illinois' law, for example.
Stephan and her team initially held off on calling their chatbot, which looks like a cartoon panda, a therapist. But when users began using the word in reviews, they embraced the term so the app would show up in searches.
Last week, they backed away from therapy and medical terminology again. Earkick's website had described its chatbot as "your empathic AI counselor to support your mental health journey," but now it is a "chatbot for self-care."
Still, "we don't diagnose," Stephan insisted.
Users can set up a "panic button" to call a trusted loved one if they are in crisis, and the chatbot will "nudge" users to seek out a therapist if their mental health worsens. But it was not designed to be a suicide prevention app, Stephan said, and police would not be called if someone told the bot about thoughts of self-harm.
Stephan said she is happy that people are looking at AI with a critical eye, but worried about states' ability to keep up with innovation.
“The speed at which everything is evolving is huge,” she said.
Other apps blocked access right away. When Illinois users download the AI therapy app Ash, a message urges them to email their legislators, arguing that "misguided legislation" banned apps like Ash while leaving the unregulated chatbots it intended to rein in free to operate.
A spokesperson for Ash did not respond to multiple requests for an interview.
Mario Treto Jr., secretary of the Illinois Department of Financial and Professional Regulation, said the ultimate goal is to ensure that only licensed therapists are providing therapy.
"Therapy is more than just word exchanges," Treto said. "It requires empathy, clinical judgment and ethical responsibility that AI can't truly replicate right now."
In March, a Dartmouth College-based team published the first known randomized clinical trial of a generative AI chatbot for mental health treatment.
The goal was to have the chatbot, called Therabot, treat people diagnosed with anxiety, depression or eating disorders. It was trained on vignettes and transcripts written by the team to illustrate evidence-based responses.
The study found that users rated Therabot similarly to a therapist and that their symptoms were meaningfully lower after eight weeks compared with people who didn't use it. Every interaction was monitored by a human who intervened if the chatbot's response was harmful or not evidence-based.
Nicholas Jacobson, a clinical psychologist whose lab is leading the research, said the results showed early promise, but that larger studies are needed to demonstrate whether Therabot works for large numbers of people.
"The space is so dramatically new that I think the field needs to proceed with a lot more caution than is happening right now," he said.
Many AI apps are optimized for engagement and built to affirm everything users say, rather than challenging people's thoughts the way therapists do. Many walk the line between companionship and therapy, crossing intimacy boundaries that therapists ethically would not.
The Therabot team tried to avoid these issues.
The app is still in testing and not widely available. But Jacobson worries about what strict bans will mean for developers taking a careful approach. He noted that Illinois has no clear pathway for providing evidence that an app is safe and effective.
"They want to protect people, but the traditional system right now is really failing them," he said. "So trying to stick with the status quo is really not the thing to do."
Regulators and advocates of the laws say they are open to changes. But today's chatbots are not a solution to the mental health provider shortage, said Kyle Hillman.
"Not everyone who feels sad needs a therapist," he said. But for people with real mental health issues or thoughts of suicide, he said, telling them "I know there's a workforce shortage, but here's a bot" is "such a privileged position."
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute's Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.
