Article In Brief
Artificial intelligence is poised to transform health care, and doctors warn that now, not later, is the time to address the ethical issues sure to arise as medical researchers and clinicians adopt the technology.
As artificial intelligence (AI) moves quickly from being a curiosity to a reality in all facets of life, health care providers and administrators are focusing on how to ensure AI-related technology will help advance the care of patients—while also protecting patient privacy and the doctor-patient relationship at the core of medicine.
Doctors predict that AI will transform health care, but they warn that now, not later, is the time to address the ethical issues sure to arise as medical research and everyday clinical care start using the technology.
A key question is how to protect patient privacy since medical information likely will be fed into an AI program to determine the best treatment plan based at least in part on a vast database of clinical trial results and real-world cases.
As doctors interact more with AI-related tools, whether for clinical decision-making or improving practice efficiency, the patient protections that regulations and institution-specific policies provide will have to be reviewed. They may need to be adjusted for a health care environment where AI tools such as ChatGPT or GPT-4 can answer simple and complex questions as well as write essays that may read better than those real people produce.
In addition to helping plan treatment, AI-related tools likely will help patients better understand their diseases, document patient encounters in electronic health records, and craft appeal letters to insurance companies after a patient is denied coverage. But another question looms: how will doctors know that AI-generated information is accurate and trustworthy and that it serves to improve patient care and outcomes?
“We are talking about technology that is so transformative that it will touch every aspect of how we practice medicine,” said Daniel Goldenholz, MD, PhD, an epileptologist at Beth Israel Deaconess Medical Center and assistant professor of neurology at Harvard Medical School who has written about ethical issues of AI in health care.
Dr. Goldenholz coauthored a 2021 Neurology paper that summed up these concerns: “A fundamental question that the epilepsy and broader neurology community must answer in coming years is the degree of responsibility each party (researchers, industry, clinical, regulatory agencies) carries in the AI pipeline, in order to facilitate a goal of ensuring that AI promotes rather than endangers clinical practice.”
The paper noted that the four “core principles of bioethics—respect for patient autonomy, beneficence, nonmaleficence, and justice—are pertinent in AI.” It recommended adding a fifth principle to the list: the transparency of process.
Dr. Goldenholz told Neurology Today that clinicians shouldn’t sit back and wait for AI to come to them. “I think that every single doctor, and especially every single neurologist, needs to be aware of the potential pitfalls and dangers of these tools” as well as the potential benefits, he said.
Many “overburdened physicians who have too much paperwork are going to be thrilled to use these tools,” Dr. Goldenholz said, but at the same time, physicians will have to learn “which tools to trust and how to use them to their patient’s benefit.”
Recently he experimented with using GPT-3.5 to write a letter of appeal for an off-label use of an epilepsy drug. He said the letter was stylistically fine, but it contained an inaccurate statement about which professional societies backed the use of the drug as well as a false statement about approval by the US Food and Drug Administration. He then edited that version to produce an accurate letter to submit to the insurance company, which he said took less time than starting from scratch.
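For readers who want a concrete picture of that workflow, the sketch below shows one way such a draft might be generated programmatically. It is an illustration only, assuming the OpenAI Python SDK; the prompt wording is invented, no patient identifiers are included, and, as Dr. Goldenholz's experience shows, any draft must be fact-checked and edited by the physician before use.

```python
# Hypothetical sketch: drafting (not sending) an insurance appeal letter with an LLM.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
# No patient identifiers are included, and every claim in the draft
# must be verified by the physician before the letter is used.
from openai import OpenAI

client = OpenAI()

prompt = (
    "Draft a letter of appeal to an insurer supporting off-label use of an "
    "anti-seizure medication for refractory focal epilepsy. Leave bracketed "
    "placeholders for any society guidelines or FDA statements rather than "
    "asserting them."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content
print(draft)  # a starting point only; the physician edits and fact-checks it
```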
Determining Its Use
Some people have experimented with the AI-based technology ChatGPT, which has evolved from being a novelty at family gatherings to a tool poised to help improve people’s health and well-being.
Many health care organizations and professional groups have begun formulating guidelines on AI’s use. One recent example is the “Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare,” a guide the Coalition for Healthcare AI (CHAI) released in April. It outlines the potential of AI in medical education, patient care, and administrative tasks, such as billing, but also potential harms, such as privacy breaches and dissemination of inaccurate, biased, and even harmful medical information.
Andrew Coyne, chief information security officer at Mayo Clinic, said the arrival of AI technology reminds him of a time when microcomputers started appearing in people’s homes, eventually transforming communications.
“A massive new vista is opening,” he said. “You hear a lot of hyperbole around AI, but it really does look like a whole new opportunity to improve care that wasn’t here before.”
Coyne said that despite the newness of AI-based technology, the same rules of patient privacy and data security in place for other technologies apply. For instance, outside vendors that health care systems deal with, whether for electronic health records or billing, must follow similar rules of privacy and security.
In incorporating tools from AI companies, he said, “we would want to be as thoughtful as we are with using other vendors.” Likewise, doctors already know not to use their personal email accounts to deal with patient information, so it wouldn’t be OK to use ChatGPT in the patient setting outside of a hospital-vetted system.
Coyne said the federal HIPAA privacy rule was written quite broadly, which “is a good thing because it enables regulators to allow health care to experiment with new technology and over time increase the specificity of requirements.” He said the question that needs to be repeatedly asked as AI tools are added to the health care system is, “What is the right thing to do for the patient?”
David Jones, MD, an assistant professor of neurology and radiology at Mayo Clinic, does not believe that being an all-out naysayer about AI is a constructive stance for physicians to take given that the technology is a reality, not a theoretical possibility.
“The transformation is occurring, and it (AI) is going to be a powerful tool that is going to change the way we do things,” he said.
Some doctors worry that if they don’t understand AI technology, they cannot ethically use it with their patients. Dr. Jones said doctors won’t need to grasp every detail of the workings of an AI application but rather know enough to recognize when the information generated is trustworthy and merits consideration.
Using the analogy of driving a car, he said, “We drive a car and don’t know about carburetors,” but most drivers do know the rules of the road that allow them to drive safely.
More Access to Care
Dr. Jones, who conducts research and sees patients with Alzheimer’s disease and dementia in his clinical practice, said he believes the field of neurology could especially benefit from AI tools because the availability of “neurologic expertise is limited, and the data is dense and complex.”
“It could really democratize expert neurologic care,” he said, noting that AI tools might allow more equal access to top-notch neurologic care for people who live far from centers of expertise or face barriers to accessing health care because of their life circumstances.
With AI-based tools, a community doctor who might only occasionally see a given neurologic disease could tap into a broad pool of cases, research findings, and expert experience to inform diagnosis and treatment planning. Many general practitioners and specialists already use clinical support tools to calculate disease risk and guide prescribing, but those tools likely will seem rudimentary compared with upcoming AI-based programs drawing on large, up-to-date databases that are more representative than any single clinical trial could be.
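To make that contrast concrete, the toy sketch below mimics the kind of simple, rule-based risk calculator already common in practice. The points, thresholds, and categories are invented for illustration and are not drawn from any validated instrument.

```python
# Toy, rule-based risk calculator of the kind the article contrasts with AI tools.
# Points, thresholds, and categories are invented; this is not a validated score.
def toy_stroke_risk_points(age: int, hypertension: bool, diabetes: bool) -> int:
    points = 0
    if age >= 75:
        points += 2
    elif age >= 65:
        points += 1
    if hypertension:
        points += 1
    if diabetes:
        points += 1
    return points

# A fixed lookup maps points to a category; nothing here is learned from data.
category = {0: "low", 1: "low", 2: "moderate", 3: "high", 4: "high"}
print(category[toy_stroke_risk_points(age=70, hypertension=True, diabetes=False)])
```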
Dr. Jones said neurologists must routinely gather and interpret a wide range of information to make a neurologic diagnosis, which isn’t always easy or straightforward. For a patient who presents with signs of dementia, for instance, the clinician reviews and considers myriad factors—the clinical exam, a complex clinical history, neuropsychological testing, brain MRIs, multiple types of nuclear medicine-based brain scans (including FDG-PET, tau-PET and amyloid-PET), spinal fluid studies, electrophysiology, and various blood tests—before creating a treatment plan.
One of Dr. Jones’s research interests is using machine learning to interpret FDG-PET scans, which look for differences in brain metabolism related to aging and neurodegenerative causes of dementia but can be tricky to analyze. He is developing an AI program that includes a database of imaging cases from the Mayo Clinic and could help with that interpretation.
Dr. Jones, who heads AI efforts for Mayo’s department of neurology, believes it is critical that research be at the heart of adopting AI into clinical care. In a paper he coauthored for Neurology in 2022, Dr. Jones noted that “putting the needs of the patient first and adhering to tenets of rigorous research have driven health care innovations of the past, and this primary value must also drive the AI/ML [AI/machine learning] based innovations of the future.”
Possible Bias
The AAN recently established the Quality Informatics Subcommittee to address these and other issues. The new subcommittee will look at the application of AI and machine learning in neurology, including the ethical implications of using these tools.
Subcommittee Chair Lidia Moura, MD, PhD, MPH, FAAN, a neurophysiologist and epidemiologist who specializes in epilepsy at Massachusetts General Hospital, said one concern is ensuring that AI/ML-generated information is fair and unbiased.
Medicine “has a history scarred by systemic racism, evident in the fact that the majority of clinical trials are predominantly composed of White individuals, those with private insurance, and English speakers,” said Dr. Moura, who also is associate professor of neurology at Harvard Medical School.
“Additionally, clinical guidelines and risk assessment tools often fail to incorporate the valuable insights gained from [people from] disadvantaged backgrounds and other minority groups.”
One hope for AI/ML technologies is that they will draw from more diverse and representative databases, but that isn’t necessarily a given, she said.
“I think being aware of potential bias is really critical,” Dr. Moura said. She said large language models, a type of AI tool, could propagate bias “without us being aware of it.”
While large language models themselves don’t have personal opinions or biases, she said, they learn from the text data they are trained on. If that data contains biased language, stereotypes, or discriminatory attitudes, whether present consciously or unintentionally, the model can absorb and reflect those biases in its output.
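A small, synthetic example can make that mechanism concrete. In the sketch below, which uses invented numbers rather than real clinical data, a model trained on historical decisions that underserved one group reproduces that disparity even when the clinically relevant signal is identical.

```python
# Synthetic illustration of bias propagation: a model trained on skewed
# historical decisions reproduces the skew. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

group = rng.integers(0, 2, size=n)   # 0 = majority group, 1 = minority group
severity = rng.normal(size=n)        # the clinically relevant signal

# In the biased historical data, group 1 was referred less often at the same
# severity. That pattern lives in the labels, not in the underlying biology.
referred = (severity - 0.8 * group + rng.normal(scale=0.3, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([severity, group]), referred)

# Identical severity, different group membership, different predicted referral.
same_severity = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(same_severity)[:, 1])  # group 1 gets a lower probability
```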
Allison L. Weathers, MD, FAAN, a neurologist and enterprise associate chief medical information officer at Cleveland Clinic who co-chairs the AAN’s Quality Informatics Subcommittee, said there should be transparency at every phase of AI development and implementation, especially given that so much medical misinformation is circulating in the digital world.
Users of an ML program need to know the basics of “what goes into it and how a score or other information is being generated,” she said. The intended use of an AI-based tool should also be clear to the practitioner, she added, because tools created with good intentions can lead to unintended consequences, especially if the original intent of the tool is lost over time.
“There might be a model that is used to predict whether a patient will need more support after surgery in order to optimize their recovery,” she said. The intent is to make sure that the patient gets more attention, more time, and perhaps more wrap-around services. But that same information could be used to determine that certain patients may not do well, “so we should not offer the choice of doing surgery on them,” Dr. Weathers said.
Dr. Moura said that while ethical issues no doubt will need to be addressed, solutions based on ML could end up arriving at just the right time for health care.
“We are living in an environment where physician burnout is growing,” she said. “The issue of health care efficiency has long been a problem, while the concern for sustainability in health care continues to grow. If we can increase the efficiency of care teams, find ways in which we can improve the physician-patient interaction, we will find ways we can provide better care.”
Learn the Facts First
Allan D. Wu, MD, FAAN, professor of neurology and director of applied clinical informatics at Northwestern University, said the debates and discussions around AI are somewhat similar to those that swirled around the introduction of electronic medical records. There were worries about patient privacy, security breaches, and the possibility of putting even more demands on overworked physicians and staff.
Dr. Wu, who chairs the AAN Practice Management and Technology Subcommittee, said the first priority of physicians should be to educate themselves about AI, whether by taking courses or attending seminars at professional meetings held by the AAN and others. He said the tech industry is making many promises, but he advised clinical neurologists “to be cautious and get educated first.”
Dr. Wu said physicians also need to have a clear reason for adopting a new technology. Good questions to ask include, “What things in your practice do you want to improve, and what will it cost to do it?” New technology needs to be useful, adding a “high-value gain” to the practice of medicine, he said. New technology also can’t be viewed in isolation.
“Nothing that happens in the clinic is about the technology. It’s about the people, the system of care,” Dr. Wu said. “Even when AI matures, it is going to require a physician and care team to provide context to the patient” as part of responsible decision-making.
“After you educate yourself about the risks and benefits of AI, make sure you go into it carefully and responsibly,” Dr. Wu said. “You can’t just adopt something that affects people’s lives … without recognizing the risks and recognizing the benefits.”
What Is Artificial Intelligence?
One initial challenge within the physician community and the broader health care system is getting people to understand terminology. Microsoft’s website defines artificial intelligence as “the ability of a computer system to deal with ambiguity, by making predictions using previously gathered data, and learning from errors in those predictions in order to generate newer, more accurate predictions about how to behave in the future.”
Machine learning, already used in many industries, is a subset of AI. Microsoft defines machine learning as “the process of using mathematical models of data to help a computer learn without direct instruction.” It uses “algorithms to identify patterns within data, and those patterns are then used to create a data model that can make predictions. With increased data and experience, the results of machine learning are more accurate.”
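As a rough, hands-on illustration of that definition, the sketch below fits a simple model to synthetic data and checks how well the learned patterns predict unseen cases. The data are invented, and scikit-learn is assumed purely for illustration.

```python
# Minimal sketch of "learning from data": fit a model, then predict unseen cases.
# The data are synthetic and the features are meaningless placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "previously gathered data": 500 cases, 3 numeric features each.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Hold out some cases to check how well the learned patterns generalize.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()   # the "mathematical model of data"
model.fit(X_train, y_train)    # identify patterns without direct instruction

print("accuracy on unseen cases:", model.score(X_test, y_test))
```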