UMich students and staff discuss the use of AI for treatment

As generative artificial intelligence grows in popularity, more University of Michigan students are using the technology for counseling and therapeutic advice. U-M professors and students discussed the safety and effectiveness of these interactions and whether AI can fully replace in-person counseling.

Dan Adler is a newly appointed tenure-track assistant professor of computer science and engineering who researches healthcare applications of AI. In an interview with The Michigan Daily, Adler said the rise in mental health diagnoses in the United States has resulted in longer waiting lists for in-person services such as the University's Counseling and Psychological Services, leading to increased use of AI in treatment.

“Many people don’t have access to professional mental health care or face challenges finding affordable mental health care,” Adler said. “So when people feel isolated and have no way to reach out to professionals to support their needs, it’s no surprise that they turn to technology that is easily accessible and available 24/7 for their mental health needs.”

A conversation between an individual and an AI differs in many ways from a conversation with a therapist. Dr. Stephen F. Taylor, chair of the School of Medicine’s psychiatry department, told the Daily that most people use the technology to ask simple, low-risk questions rather than to undergo a full treatment session.

“Most of those questions are probably relatively low-intensity, like, ‘How should I deal with my girlfriend, what should I say to her?’ and the kind of questions you would tell a friend,” Taylor said. “But while the chatbot is always there, sometimes your friend isn’t.”

A student, who requested anonymity due to the stigma associated with mental health, spoke to the Daily about their experience using AI in treatment; they will be referred to as Alex in this article. Alex said they use the technology in place of traditional counseling services because they live outside of Michigan and it is difficult to coordinate appointments during school breaks. They also said time constraints kept them from seeing in-person counselors.

“I only use AI when I feel like I have a lot of emotions to deal with, when I feel like I need to talk to someone about it, but it’s really inconvenient, or when I know the people I trust are busy right now,” Alex said. “So instead of bothering them, I just go to the AI and confide everything.”

When Alex experiences these emotions, they discuss them with ChatGPT or Google Gemini. The AI chatbot listens to their situation and asks follow-up questions, much like an in-person therapist would.

“Maybe after 30 minutes to an hour, depending on how I’m really feeling and how I feel at the end of that session, I take that advice into consideration. And I think I really use that to understand the other person’s perspective,” Alex said. “It helps you understand your emotions and why you were hurt.”

But Emily Mower Provost, professor and senior vice chair of computer science and engineering, told the Daily that AI users still need to question the guidance these chatbots provide.

“If you’re worried about how other people will interpret what you’re saying, and you want to ask questions, but you’re worried about the human reaction or feeling judged, I think things like AI are very appealing,” Provost said. “We still have to be very careful about what recommendations it’s making. It’s not a human therapist. It doesn’t have the kinds of responsibilities that a human therapist would have.”

Provost said another issue stems from developers training AI with information that stigmatizes mental health issues, so the chatbot’s responses can reflect similar sentiments.

“Most existing systems just ingest a huge amount of data,” Provost said. “And if you see somewhere in the data, or over and over again, examples of mental illness being stigmatized in the data, one of the things you learn is to create text that has that flavor. It can talk very critically about mental health conditions. It can suggest things like segregating people and removing access.”

Taylor said AI could also default to agreeing with users rather than pushing back against flawed assumptions, which could put people with mental illness who use technologies like ChatGPT at particular risk.

“They are designed based on reinforcement learning with human feedback,” Taylor said. “They’re conditioned to say things that make people happy. So I think, in people who are prone to delusions, those delusions get amplified by that phenomenon.”

Alex acknowledged that the AI they use often mirrors their emotions. However, they said they are not afraid to push back when they know the chatbot is endorsing incorrect inferences.

“If I share my perspective for a significant amount of time, it’ll be convinced that I have a point,” Alex said. “But obviously, nothing is black and white. Everything is very gray. So I feel like I always have to think, ‘I don’t think this is right.’”

Taylor said that despite concerns about the use of AI therapy, conversations with AI can still make people feel better, and the positives may outweigh the negatives.

“If a person is lonely and they have a relationship that makes them feel good, can we be sure that it’s necessarily going to be bad?” Taylor said. “I say no. I don’t know, just because there’s a risk doesn’t mean it’s bad. We always approach treatments with risks and balance the risks and benefits.”

To alleviate the problems associated with general-purpose AI like ChatGPT, Public Health student Aarush Goel helped create WanderWell, an AI chatbot focused specifically on helping people with substance abuse disorders. Goel told the Daily he developed the app last year with a group of students in Public Health 555: ChatGPT/AI and Public Health and is currently working to expand it across the University.

“Many students not only face academic pressures, but also have to balance numerous work and life responsibilities, which is a major contributing factor to the increase in substance abuse disorders in our communities,” Goel said. “Specifically, I think AI can play a big role in actually being a positive resource, and this particular chatbot could have a big impact on students asking for help without feeling biased.”

However, Goel said WanderWell cannot replace in-person treatment, and guardrails are in place to ensure the chatbot does not operate beyond its capabilities.

“When students entered specific responses indicating specific symptoms of suicide or other more extreme cases of psychiatric symptoms, we specifically instructed the AI chatbot to refer them to an in-person therapist or to seek out a crisis hotline through the University of Michigan,” Goel said. “Through that, we have tailored it to address more general mental health needs, but focused on making sure that in more extreme cases, especially emergency situations, the results cannot be replicated without the right expertise.”

Provost said the best use of AI in the mental health field is to complement, rather than replace, in-person counseling.

“It’s true that not everyone has access to care, and it’s very tempting to say that AI can close that gap by allowing people to get care that they wouldn’t otherwise have access to,” Provost said. “That’s not necessarily true, but it’s very important to realize that where we are right now is not at all suitable for such a replacement.”

Daily Staff Reporter Dominic Apap can be reached at dapap@umich.edu.


