In a review article recently published in npj Digital Medicine, researchers systematically investigated the ethical implications of introducing large language models (LLMs) into medicine.
Their conclusions indicate that while LLMs offer significant benefits such as enhanced data analysis and decision support, persistent ethical concerns regarding fairness, bias, transparency, and privacy mean that their application requires clear ethical guidelines and human oversight.
Study: Ethics of ChatGPT in Medicine and Healthcare: A Systematic Review of Large Language Models (LLMs)
Image credit: Summit Art Creations/Shutterstock.com
Background
LLMs have attracted widespread interest due to their advanced artificial intelligence (AI) capabilities, which have been prominently demonstrated since OpenAI released ChatGPT in 2022.
The technology is rapidly expanding into various fields, including medicine and healthcare, and shows promise for tasks such as clinical decision-making, diagnosis, and patient communication.
Alongside these potential benefits, however, concerns have emerged about the technology's ethical implications: previous studies have highlighted risks such as the dissemination of inaccurate medical information, privacy violations arising from the handling of sensitive patient data, and the perpetuation of biases based on gender, culture, or race.
Despite these concerns, there is a notable gap in comprehensive research that systematically addresses the ethical challenges of integrating LLMs into healthcare, with existing literature focusing on specific cases rather than providing a holistic overview.
Method
Addressing existing gaps in this field is crucial, as healthcare environments require strict ethical standards and regulations.
In this systematic review, the researchers mapped the ethical landscape surrounding the role of LLMs in healthcare and identified potential benefits and harms to inform future discussions, policies, and guidelines to govern the ethical use of LLMs.
The researchers designed a review protocol covering practical applications and ethical considerations and registered it with the International Prospective Register of Systematic Reviews (PROSPERO). No ethical approval was required.
They collected data by searching relevant publication databases as well as preprint servers; preprints were included given their prevalence in this fast-moving technical field and the possibility that relevant work had not yet been indexed in databases.
Inclusion criteria were based on intervention, application setting, and outcomes, with no restrictions on publication type; works relating solely to medical education or academic writing were excluded.
After an initial screening of titles and abstracts, data were extracted and coded using a structured form. Quality assessment was descriptive, using procedural criteria to distinguish peer-reviewed material, and findings were critically examined for relevance and comprehensiveness during reporting.
Results
The study analyzed 53 papers to explore the ethical implications and applications of LLMs in healthcare. Four main themes emerged: clinical applications, patient support applications, support for healthcare professionals, and public health perspectives.
In clinical applications, LLMs have shown potential to aid in early diagnosis and patient triage by using predictive analytics to identify health risks and recommend treatments.
However, concerns have been raised about their accuracy and the possibility of bias in decision-making, which could lead to incorrect diagnoses and treatment recommendations, highlighting the need for careful oversight by medical professionals.
Patient support applications focus on using LLMs to help individuals access medical information, manage symptoms, and navigate the healthcare system.
Although LLMs can improve health literacy and communication across language barriers, data privacy and the reliability of medical advice generated by these models remain important ethical considerations.
In supporting healthcare professionals, LLMs can automate administrative tasks, summarize patient interactions, and facilitate medical research.
While this automation has the potential to improve efficiency, there are concerns about the impact on specialist skills, the integrity of research findings, and the potential for bias in automated data analysis.
From a public health perspective, LLMs provide opportunities to monitor disease outbreaks, improve access to health information, and enhance public health communication.
However, the study highlights risks, including the spread of misinformation and the concentration of AI power in the hands of a few companies, which could exacerbate health disparities and undermine public health efforts.
Overall, LLMs represent a promising advance in healthcare, but ethical implementation requires careful consideration of bias, privacy concerns, and the need for human oversight to mitigate potential harms and ensure equitable access and patient safety.
Conclusion
The researchers found that LLMs such as ChatGPT are being widely studied in the healthcare sector due to their potential to improve efficiency and patient care by quickly analyzing large datasets and providing personalized information.
However, ethical concerns remain, including bias, issues of transparency, and the generation of misleading information known as hallucinations, which could have serious consequences in clinical practice.
The study is in line with broader research on AI ethics and highlights the complexities and risks of introducing AI into healthcare.
The strengths of this study are the comprehensive literature review and the structured classification of LLM applications and ethical issues.
Limitations include the underdeveloped nature of ethical review in this field, reliance on preprint sources, and a predominance of perspectives from North America and Europe.
Future research should focus on defining robust ethical guidelines, increasing algorithmic transparency, and ensuring equitable deployment of LLMs in global healthcare settings.