Physicians who use artificial intelligence in their work run the risk of being viewed as less competent by their colleagues, according to a recent study from Johns Hopkins University.
Generative AI holds great promise for medicine, but a new study finds that its use in medical decision-making affects how doctors are perceived by their colleagues. Physicians who rely primarily on generative AI for decision-making face considerable skepticism from fellow clinicians, who equate the use of AI with a lack of clinical skill or overall competence, resulting in a lower perceived quality of patient care.
The study, funded by a 2022 Johns Hopkins Discovery Award, involved a diverse group of clinicians from a major hospital system, including attending physicians, residents, fellows, and advanced practice providers. The results were published in August in Nature Digital Medicine.
Stigma impedes better care
This finding points to social barriers to the adoption of AI in healthcare settings, barriers that could slow progress toward improved patient care.
“That kind of bias, rather than the technology itself, can be the barrier to better care.”
Tinglong Dai
Carey Business School Professor
“AI is already clearly a part of healthcare,” says Tinglong Dai, a professor at the Johns Hopkins Carey Business School and co-author of the study. “What struck us was that doctors who use technology to make medical decisions may be viewed as less competent by their colleagues. That kind of bias, rather than the technology itself, can be the barrier to better care.”
The study involved a randomized experiment in which 276 practicing clinicians evaluated various scenarios: in some, doctors did not use AI; in others, doctors used AI as their primary decision-making tool; and in others, doctors used AI only to validate their own judgment. The more heavily doctors relied on AI, the larger the “competency penalty” they faced — that is, they were viewed with more skepticism by their colleagues than doctors who did not rely on AI at all.
“Even in the age of AI, human psychology remains the ultimate variable,” says Haiyan Yang, lead author of the study and academic program director of the Master of Science in Management program at the Carey Business School. “How people perceive the use of AI can be as important, if not more important, than the performance of the technology itself.”
Skip the AI and you'll get more respect
According to the study, physicians who rely on AI are viewed more negatively by their colleagues. Positioning generative AI as a “second opinion” or validation tool partially softened those negative perceptions, but did not eliminate them. Not using GenAI at all drew the most favorable evaluations from colleagues.
This finding is consistent with theory suggesting that clinicians may view reliance on external sources, such as AI, as a sign of weakness.
“As AI becomes part of the future of healthcare, it is important to recognize its potential to complement rather than replace clinical judgment, ultimately enhancing decision-making and improving patient care.”
Lisa Wolf
Associate Professor of Pediatric Endocrinology, Johns Hopkins University School of Medicine
Ironically, while the visible use of GenAI may undermine physicians' perceived clinical expertise among their colleagues, the study also found that clinicians still generally recognize the value of GenAI for improving the accuracy of clinical assessments, and believe that GenAI customized for their own institutions would be even more useful.
The collaborative nature of the research also yielded practical recommendations for implementing GenAI in healthcare settings, which the researchers say is essential for balancing innovation with maintaining professional trust and physicians' reputations.
“Physicians value clinical expertise, and as AI becomes part of the future of medicine, it is important to recognize its potential to complement rather than replace clinical judgment, ultimately enhancing decision-making and improving patient care,” said study co-author Lisa Wolf, associate professor of pediatric endocrinology at the Johns Hopkins University School of Medicine, who also holds a position at the Carey Business School.
