No one expects much from a chatbot: mediocre writing, some fabricated facts, a bit of random racism. I've written about these AI shortcomings myself.
But as we all know by now, these big new chatbots can also generate uncannily human-like responses to prompts and questions. And in a recent head-to-head test, that ability gave the bots a surprising edge in one of the most essentially human of all activities: doctoring.
To conduct the test, a team of researchers from the University of California, San Diego, lurked on r/AskDocs, a Reddit forum where verified medical professionals answer people's medical questions. The questions ranged from the silly ("I swallowed a toothpick. My friend said I was going to die") to the terrifying ("Miscarried a day after a routine ultrasound?"). The researchers selected nearly 200 of them, typed each one into ChatGPT, and then had another group of medical professionals blind-evaluate the answers from both the AI and the human doctors.
The results were shocking. First, ChatGPT handily beat the human doctors on quality. Almost without exception, the chatbot's answers were rated three to four times more trustworthy than the answers from the mere humans. And the bot showed none of the disastrous propensity for making things up that it has displayed in other contexts.
But here's the most impressive part: the chatbot's answers were rated, on average, seven times more empathetic than the humans'. Seven times! They offered the things you look for in a doctor: care and emotional connection. It's as if Mr. Data, the emotionless android, had figured out how to convincingly mimic Dr. Crusher's comforting bedside manner.
Admittedly, the bar for beating a human doctor on empathy is low. Still, the bot's apparent ability to handle medical concerns, in both style and substance, presages real-world uses. I'm skeptical that AI bots driven by large language models will revolutionize journalism or improve internet search. I'm open to the idea that they'll make writing code and analyzing spreadsheets faster. But I now think that, with a little tweaking, chatbots could fundamentally improve the way people interact with their healthcare providers, and with the collapsing medical-industrial complex.
The point of the empathy experiment wasn't to show that ChatGPT could replace doctors and nurses. It was to show that chatbots could play a role in providing care. Our for-profit healthcare system doesn't employ enough doctors and nurses, and the ones it does employ are expected to treat ever more patients, assembly-line style. Nobody likes it, except the people getting rich off it.
"People are cut off from healthcare and in a desperate situation," says the study's lead author, John Ayers, a computational epidemiologist at the University of California, San Diego. So they look for answers on forums like r/AskDocs. "That's how patients are getting help now, and doctors haven't registered it."
The pressure to answer messages like these is intense. The COVID-19 pandemic accelerated remote, online contact between doctors and patients. Even in the pandemic's first year, research found that doctors were spending nearly an hour of each workday processing their inboxes. Add in the demands of other electronic-medical-record technologies, and some physicians spend half their day on these interactions. Insurers will often pay for time spent responding to messages, enough to make it a potential revenue stream beyond face-to-face visits.
Previous studies have explored whether patients and physicians like using these messaging systems. Ayers wanted to know whether the systems actually work. "We used real messages," he says. "No one has ever done that before." Judged on the quality of the interactions, the results were conclusive. "ChatGPT won by a landslide," Ayers says. "This is probably primed for prime time."
Building on the bot's initial success, Ayers is ready to see what more it can do. "We want to do a randomized controlled trial that evaluates patient outcomes," he says: judging not just whether a message is accurate or empathetic, but whether it helps people live healthier, longer lives. What if a chatbot could help people recovering from a heart attack by reminding them to stick to a low-sodium diet, take their medication, and stay on top of the latest treatments? "It could be life-saving for patients," Ayers says.
Despite the tech industry's promises of pet robots and AI psychotherapists, the idea of compassionate chatbots remains shaky, even dangerous. No one thinks ChatGPT actually cares, any more than they think it's actually smart. But if our broken healthcare system makes it impossible for humans to care for one another, fake caring might actually save lives. AI caregivers may be less human than humans, but they may well be more humane.
Chatbots aside, specialized AI systems are already pretty good at diagnosis. They're trained to detect one thing, such as tumors or sepsis, using specific test results as input. But they're expensive and difficult to build. So healthcare organizations are jumping on chatbots as a cheaper, more versatile tool. Dozens of companies are developing applications for uses ranging from diagnosing illness to handling the cumbersome paperwork that burdens doctors and patients alike. If you're lucky enough to have health insurance, your insurer probably already has some kind of dumb chatbot you can talk to before you reach a human.
Ask people whether they like this idea, and most will say no. Sixty percent of Americans recently surveyed by the Pew Research Center said they wouldn't want an AI system to diagnose their ailments or suggest treatments. But they'll probably get one anyway. Don't tell anyone I said this, but much of what healthcare professionals do is already a bit formulaic, at least at the lowest levels of patient-facing interaction. You feel sick, so you call the advice nurse, who asks a set of preset questions to determine whether you should go to the ER or just take some Tylenol. It's called triage. Or, if you have electronic access to your medical records, you might email your doctor to ask what the results of a new battery of tests mean. Either way, you don't expect these encounters to be fun. They're perfunctory. They're robotic, even.
And you know who's good at being robotic? Robots! A team of Harvard researchers recently presented dozens of descriptions of health problems to three groups: doctors, people without medical training, and ChatGPT. Each was asked to make a diagnosis and a triage recommendation.
The non-doctors were allowed to search the internet, a practice medical professionals fearfully call "Dr. Google." But even with online help, the untrained humans' diagnoses were terrible. No shock there. Yet as the researchers report in a recent preprint (meaning it hasn't been peer-reviewed yet), the chatbot's diagnostic performance was about on par with the human doctors': scores above 80 percent, versus the doctors' scores above 90 percent. And on triage, ChatGPT's accuracy was just over 70 percent. That sounds terrible next to the doctors' 91 percent, but still: here's a general-purpose chatbot performing roughly as well as well-trained physicians.
Now imagine adding to that skill set the mundane, time-consuming medical tasks a chatbot should be able to handle: scheduling appointments, requesting insurance pre-authorizations, processing electronic medical records. "These are tasks that nobody goes into medicine to do, and they're physically and mentally exhausting, a severe headache and an enormous amount of lost time," says Teva Brender, a resident physician at the University of California, San Francisco. Perhaps a chatbot could generate at least a first draft of this kind of bureaucratic traffic, along with all those emails to patients. "The doctor could skim it, say, 'Yes, this is correct,' and send it off," Brender says.
That seems like the likeliest scenario: finely tuned chatbots working alongside doctors, nurses, and physician assistants to give more empathetic, more complete answers to people who need care. As Ayers' team wrote back in 2019, people are so desperate for medical help that they post pictures of their genitals on the r/STD subreddit in hopes of getting an accurate diagnosis. That is unbelievably sad, and a stunning indictment of our truly broken, inhumane medical system.
In a system this broken, AI could actually make things better. "Human clinicians, underpinned by the knowledge base and processing power of AI systems, will be even better," says Jonathan Chen, a physician at Stanford University School of Medicine who studies AI systems. "It is quite possible that patients will seek imperfect medical advice from an automated system that is available 24/7, rather than waiting months for an appointment with a human expert."
To make these AI-driven systems better, many researchers, including Ayers' team, are now working on smaller language models fine-tuned on medical information. The beauty of ChatGPT is that it's a generalist, drawing on everything on the internet. But that's also how bias and misinformation creep in. Giving medical chatbots access to people's individual health records would let them offer more precisely targeted advice. "When this technology can access the electronic medical record, that's a real game changer," Ayers says.
If the prospect of AI-powered health advisors with access to your medical records makes you uneasy, I don't blame you. The bad sci-fi ending here is pretty dystopian. Despite years of effort, the Food and Drug Administration still doesn't have a ready framework for regulating AI and machine learning in medical devices. Someone will have to sort out the liability when a chatbot's advice goes egregiously wrong. Healthcare AI startups will want the cheapest version they can build, which won't necessarily produce the best patient outcomes. And if health companies succeed in fine-tuning chatbots on cutting-edge medicine, another outfit could do the same with homeopathy, or scented candles, or anti-vaccine nonsense. Those chatbots would spew dangerous misinformation just as eloquently and empathetically.
"That's the worst case," says Greg Corrado, Google's head of health AI. "This isn't something people in Silicon Valley can do in isolation." That means developing these systems in collaboration with medical professionals, not just healthcare executives, and making sure the systems are private, secure, and actually helpful to people.
It won't be easy, but it may be necessary. Our healthcare system, sadly, was not built to provide decent care to everyone. Until that changes, robots that help keep us healthy would be welcome. And if they can simulate caring about us at the same time, perhaps more convincingly than human doctors do, that's still a pretty good message to receive.
Adam Rogers is a senior correspondent at Insider.