Cork tech expert warns over use of AI for mental health treatment

A Cork professor has warned of the dangers of people turning to AI chatbots instead of professional therapists for mental health support.

Gregory Provan is a professor in the School of Computer Science and Information Technology at University College Cork.

His comments come amid an increase in the number of people using AI chatbots for mental health therapy.

A recent survey found that 52% of young adults in the US feel comfortable discussing mental health with AI chatbots.

The makers of ChatGPT are changing the way the chatbot responds to users who show signs of mental and emotional distress, following legal action by the family of 16-year-old Adam Raine in California.

OpenAI acknowledged that its systems could “fall short” and said it plans to install “stronger guardrails around sensitive content and risky behaviour” for users under the age of 18.

Provan told The Echo that, based on his research in the field of information technology, the involvement of AI apps in areas such as mental health is a particularly sensitive issue.

“The moment a task becomes more challenging, large language models tend to make errors. That is the case even for programming, where the right and wrong solutions are known.

“When you extend that to something as complex as medicine, you have a really big problem. Doctors make very subtle choices, and when they get test results back, they rely on their knowledge of previous patients.

“Mental health is even more complicated, and I would never recommend using a chatbot for it.

“During the summer, I used a language model to help me plan my trip to the UK. It made some serious errors. In one case, it said a drive would take two and a half hours, when it was actually closer to four.

“For simple things like that, you can recover from the errors. But if an error is made with mental health advice, it's very dangerous.”

Provan added that AI tools such as ChatGPT could still be of some help to people looking after their mental health, depending on how they are used.

“I think it comes down to the way we use them as individuals. If you have the right skepticism and view them as sources that could potentially be wrong, I think that's fine.

“But if people assume that the information they provide is guaranteed to be correct, or if they become overly dependent on them, I think that is a misuse.

“If you find anything serious about your health or mental health, you need to bring it to the right health or mental health professional.

“Obviously, these companies offer language models as products that can be used in a variety of ways, and we need to be very careful about how they are used.

“I think doctors are fine as long as they don't rely too much on these tools.”

Childline service manager Aoife Griffin said that, based on the service's experience with callers, more children and young people are using ChatGPT for mental health advice.

“There is good advice out there in terms of techniques to help them, say, if they're having a panic attack.

“But for any form of treatment, our advice is to go to an accredited mental health professional who knows you.

“Therapy is very nuanced, and I think you need to know a person's background. It needs to be provided in a safe environment, and ChatGPT is not really a good option for that kind of intervention.”

Griffin said that while tools like ChatGPT can be useful for general mental health advice, they cannot be specific to the individual or work on a case-by-case basis.

“They're not going to know you. There's no therapeutic relationship there. You miss the tone, the body language, someone's general demeanour, and more.

“ChatGPT doesn't know who the person is, and it doesn't know their history. It cannot tell the difference between someone in mild distress and someone in crisis.

“That's a real growing concern for us and for the people who work with our children and young people.”

Griffin continued: “The thing is to make these platforms safe. We all use the internet, and we understand that there is certainly a place for it, but it has to be as safe as possible.

“We have to be very careful, because children and young people will be accessing it and using it for support.

“And the right channel for anyone, in terms of getting support, is to go to their GP.”

Liam Quaide, a Social Democrats TD for Cork East, worked as a clinical psychologist with a community adult mental health team for many years, and most recently in hospital outpatient services.

“The essence of therapy is deep human engagement, what the renowned psychoanalyst Neville Symington described as the core of one person meeting the core of another.

“AI therapy is by its nature superficial, robotic and vapid. The depth of the therapeutic relationship should not be reduced to machine-generated algorithms or comforting scripts.

“Real therapy is rooted in trust, presence, and the ability to face uncomfortable truths alongside another person.

“AI promises instant validation and efficiency, but it will lead to people becoming lonelier, more isolated, and unable to navigate the complexities of their struggles.”

A 2024 paper by Olawade et al. in the Journal of Medicine, Surgery, and Public Health found that while AI has the potential to revolutionize mental health care, responsible and ethical implementation is essential.


