
Recently, a digital Rubicon of sorts has been crossed in the healthcare space, inspiring amazement, disgust, and even fear.
Google has launched a number of health initiatives, but none has garnered more attention than an update to its medical large language model (LLM), Med-PaLM, which was first introduced last year.
As you may know, an LLM is a form of artificial intelligence trained on vast amounts of data, which in the case of the hugely popular ChatGPT meant much of the internet’s content prior to 2021. Using machine learning and neural networks, it can spit out confident, eerily human-like answers to questions in the blink of an eye.
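To make that concrete, here is a minimal sketch of what querying such a model looks like, assuming OpenAI’s openai Python package and a placeholder API key (the exact client interface varies by version; this uses the older ChatCompletion style):

```python
# A minimal sketch of querying an LLM; not production code.
# Assumes the legacy `openai` package (pip install "openai<1.0").
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # a general-purpose model, not a medical one
    messages=[{"role": "user", "content": "What causes nausea?"}],
)

# The model returns a fluent, confident answer within seconds,
# whether or not that confidence is warranted.
print(response.choices[0].message.content)
```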
In the case of Med-PaLM and its successor, Med-PaLM 2, these health-focused LLMs were fed rigorous health-related information and then made to sit the United States Medical Licensing Examination (USMLE). Consisting of three parts and requiring hundreds of hours of cramming, these exams are notoriously difficult.
Yet Med-PaLM 2 performed at a “specialist” level, scoring 85%, 18% higher than its predecessor, no doubt giving its software-engineering parents something to boast about at the pub that night.
Its peer, the generalist LLM ChatGPT, scored only at or near the 60% passing threshold, and that was on general training data rather than a health-only dataset. But that was last year; it’s hard to imagine subsequent versions not passing the exam in the near future.
Biased bots and human bias
But not everyone is convinced that these newly minted medical geniuses are good for us.
Just a few months ago, Google suffered a humiliating setback when its newly hatched bot, Bard, knocked $100 billion off the company’s market value by incorrectly answering a basic question about a telescope.
The incident sparked an ongoing debate about the accuracy of AI systems and their impact on society.
A growing concern is how racial bias tends to proliferate within the commercial algorithms used to guide the healthcare system. In one infamous case, an algorithm used throughout the U.S. healthcare system assigned the same risk level to Black patients who were far sicker than their white counterparts, cutting the number of Black patients selected for extra care by more than half.
From emergency rooms to surgery to preventative medicine, long human traditions of prejudice against women, the elderly, people of color, and others left behind have effectively been handed down to our machine wonders.
The earthly reality of broken systems
Nonetheless, the U.S. healthcare system is so badly broken that at least 30 million Americans are uninsured and tens of millions more struggle to access basic care. For them, worrying about bias may be a luxury they can’t afford.
Consider teenagers, for example. They tend to face a pile-up of troubles, from early obesity and puberty to sex, drugs, and alcohol.
According to the Centers for Disease Control and Prevention (CDC), sadness and hopelessness among teens, including suicidal thoughts and behaviors, increased 40% in the decade preceding the pandemic.
“Suicide rates and depression rates are very high, and this has been going on for a while,” said Kimberly Hoagwood, PhD, a psychologist at New York University’s Grossman School of Medicine. “During the pandemic, it’s certainly gotten worse.”
Still, statistics show that more than half of adolescents currently receive no mental health care at all. For veterans, at least 20 of whom take their own lives every day, for the elderly, for those who can’t afford steep insurance premiums, and for those facing endless waits despite urgent medical needs, health bots, even commoditized ones like ChatGPT, can become a lifeline.
A recent national survey by Woebot, a popular health chatbot service, found that 22% of adults have used an AI-powered health chatbot, and at least 44% of those said they had ditched human therapists entirely to use chatbots exclusively.
The doctor is (always) in
So it’s easy to see why we’re turning to machines for help.
AI health bots don’t get sick or tired. They don’t take holidays. They don’t care if you’re late for your appointment.
They also don’t judge you the way humans do. After all, psychiatrists are human, too, and can be as culturally, racially, or gender biased as anyone else.
But are health bots effective? So far, no national studies have assessed their effectiveness, but anecdotal evidence suggests something extraordinary is happening.
Even Eduardo Bunge, an associate professor of psychology at Palo Alto University who admitted to being skeptical of health bots, was won over when he decided to try a chatbot during a period of unusual stress.
“It gave me exactly what I needed,” he told Psychiatry Online. “At that point, I realized there was something relevant going on here.”
Anthropologist Barclay Bram, who studies mental health, fell into a slump during the pandemic and turned to Woebot for help, according to an opinion piece he wrote for The New York Times.
The bot checked on him daily and sent him gamified tasks to help him overcome his depression.
The advice bordered on the mediocre. Still, with the repeated practice the bot prompted, Bram says his symptoms lessened. “Maybe daily healing doesn’t have to be so complicated,” he wrote in the column.
“Hallucinated” answers
Yet bots like ChatGPT, which digest internet content and spew out answers to complex medical ailments, can produce disastrous results.
To test ChatGPT’s medical credentials, I asked it for help with some made-up ailments. First, I asked for a remedy for nausea.
The bot suggested a variety of measures (rest, hydration, bland foods, ginger) and finally an over-the-counter medication such as Dramamine, followed by advice to see a doctor if symptoms worsened.
But taking Dramamine can prove dangerous if you have thyroid problems, elevated eye pressure (which glaucoma patients suffer from), or high blood pressure, among other conditions. None of these were flagged, nor was there any warning to check with a doctor before taking the medication.
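The kind of spot check I did by eye is easy to sketch in code. Below is a minimal illustration in Python, with hypothetical and far-from-exhaustive keyword lists, of a heuristic that flags a bot reply recommending a drug without mentioning any contraindication or a doctor:

```python
# Minimal sketch of a safety heuristic, not a clinical tool.
# The keyword lists here are hypothetical and far from exhaustive.
DRUG_TERMS = ["dramamine", "dimenhydrinate"]
SAFETY_TERMS = ["doctor", "physician", "contraindicat",
                "glaucoma", "blood pressure", "thyroid"]

def flag_reply(reply: str) -> bool:
    """Return True if the reply names a drug but offers no safety caveat."""
    text = reply.lower()
    names_drug = any(term in text for term in DRUG_TERMS)
    has_caveat = any(term in text for term in SAFETY_TERMS)
    return names_drug and not has_caveat

reply = ("Try rest, hydration, bland foods, and ginger. An "
         "over-the-counter medication such as Dramamine may also help.")
print("FLAG" if flag_reply(reply) else "OK")  # prints FLAG
```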
Next, I asked ChatGPT about “drugs to consider for depression.” The bot was diligent enough to first suggest consulting a medical professional, since it is not qualified to give medical advice, but it then listed several categories and types of serotonergic drugs commonly used to treat depression.
But last year, a groundbreaking and widely reported umbrella study, which examined hundreds of earlier studies spanning decades of research on the link between depression and serotonin, found no evidence of any such link.
That points to a fundamental problem for bots like ChatGPT: the potential to serve outdated information in a highly dynamic field like medicine. ChatGPT’s training data only runs through 2021.
A bot may crack medical licensing exams built on established, predictable content, yet prove tragically, and perhaps dangerously, out of date when it comes to new and important scientific discoveries.
And when a bot has no answer to a question, it may simply make one up. Researchers at the University of Maryland School of Medicine who asked ChatGPT questions about breast cancer found that the bot responded with a high degree of accuracy. But one in ten answers was not merely wrong; many were completely fabricated, a widely observed phenomenon known in AI as “hallucination.”
“Our experience has shown that ChatGPT sometimes fabricates bogus journal articles and health consortia to support its claims,” said Dr. Paul Yi.
In medicine, this can mean the difference between life and death.
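Fabricated citations, at least, can be checked. Here is a minimal sketch, assuming Python’s requests library and Crossref’s public REST API, that searches for a cited title and lists the closest matching records; finding no match is a signal to investigate further, not proof of fabrication:

```python
# Minimal sketch: look up a citation against Crossref's public API.
# Assumes the `requests` package (pip install requests).
import requests

def find_citation(title: str, rows: int = 3) -> list[str]:
    """Return the titles of the closest Crossref matches for a citation."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [(item.get("title") or ["(untitled)"])[0] for item in items]

# The umbrella review on serotonin and depression mentioned above.
matches = find_citation("The serotonin theory of depression: "
                        "a systematic umbrella review")
print("Closest Crossref matches:" if matches else "No matches found.")
for title in matches:
    print(" -", title)
```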
Unlicensed and sick
All in all, it is not hard to predict enormous legal trouble ahead for LLMs if it can ever be proven that an anthropomorphic bot’s erroneous advice, standard homepage disclaimers or not, caused someone serious bodily harm.
The specter of privacy lawsuits looms as well. A recent research report by Joanne Kim at Duke University’s Sanford School of Public Policy revealed an entire underground market for highly sensitive patient data related to mental health conditions, culled from health apps.
Kim reported finding 11 companies willing to sell bundles of aggregated data containing information about the antidepressants people are taking.
One company even marketed the names and addresses of people suffering from post-traumatic stress, depression, anxiety and bipolar disorder. Another company sold an aggregated database of thousands of mental health records, starting at $275 per 1,000 “illness contacts.”
As such data permeates the internet, and thus the AI bots trained on it, both doctors and AI companies could be exposed to criminal charges and class action lawsuits from patients.
Until then, though, LLM health chatbots are an essential boon to the vast numbers of underserved and marginalized people seeking help where none exists.
If LLMs are properly managed, kept updated, and given rigorous parameters for working in the health business, they could arguably become the most valuable tools yet harnessed by the global medical community.
Now, if only they could stop lying.
