Most civil cases come down to who pays for the harm done, and both parties typically believe themselves to be innocent. With the increasing use of artificial intelligence (AI) in medicine, more people are asking the same question: who should be held responsible if a doctor uses a medical AI system for diagnosis and treatment and ultimately makes a mistake that harms the patient?
This article focuses on the legal implications of using AI for medical diagnosis and treatment recommendations.
Image credit: Have a good day photo/shutterstock.com
Increased AI use in medicine
AI technologies, including machine learning (ML) and deep learning (DL) models, are widely applied in hospitals and clinics around the world for a variety of applications, including stroke detection, diabetic retinopathy screening, and hospitalization prediction.1
Several studies have shown that the technology has provided significant benefits to healthcare systems by promoting smarter and faster solutions for both physicians and patients.2
By rapidly and efficiently analyzing large datasets, AI tools accelerate disease diagnosis and help monitor treatment response. For early cancer detection and diagnosis, radiologists use AI-based algorithms to identify patterns in radiological images that are not visible to the human eye.3
For example, AI algorithms are designed to analyze computed tomography (CT) and magnetic resonance imaging (MRI) data to screen patients with lung and prostate cancer, respectively.4
DL-based strategies have been used for early detection of breast cancer through interpretation of two- and three-dimensional mammographic images.5 Several studies have shown that AI improved overall accuracy when used as an auxiliary tool by radiologists interpreting mammograms.
Currently, many commercially available algorithms have not been widely implemented due to a lack of comprehensive data on their clinical efficacy.6
Scientists have also used AI to facilitate automated characterization of intratumoral heterogeneity, which helps predict disease progression and therapeutic efficacy. DL algorithms have been used to evaluate CT, MRI, and positron emission tomography (PET) images. Such radiomic assessment of tumor morphology allows for more accurate monitoring of the treatment response of solid tumors.
AI healthcare tools such as IBM Watson Health, Google DeepMind Health, Eyenuk, Ibex Medical Analytics, Aidoc, and Butterfly iQ are among the most popular platforms used by physicians, radiologists, psychologists, and other healthcare professionals in treatment planning for a variety of diseases.
Who is responsible? An ongoing debate
In an age of ever-increasing AI use in the healthcare sector, it is important to understand who should be held responsible if an AI-based diagnosis or treatment plan harms a patient: the AI developers, the healthcare providers, or other stakeholders. If AI errors lead to undesirable outcomes, doctors may shift responsibility to developers for defects in AI performance, while the company may counter that treatment decisions are ultimately made by the physician.
Currently, there is no clear allocation of liability among healthcare providers, AI system developers, and the regulators overseeing them when erroneous judgments harm patients.
Therefore, a comprehensive policy is required to assign responsibility and protect patients. Greater clarity is also needed to determine whether the entire AI supply chain shares responsibility.
Legal considerations for AI use in healthcare
Although the application of AI in medical diagnosis and treatment has been extremely beneficial, the technology is also associated with significant legal concerns regarding accountability, privacy, and regulatory compliance.7
For example, AI tools rely on access to patient health data, which raises questions about data privacy protections and transparency in how data are used. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA), established in 1996, protect sensitive health information from disclosure.8
Opaque AI systems can perpetuate biases introduced by imbalanced training data, and AI tools can thereby exacerbate existing inequities. When training data underrepresent particular patient demographics, AI systems may draw overly narrow generalizations and generate unfair or discriminatory treatment recommendations.
In most AI systems, the internal workings remain opaque "black boxes," which reduces accountability for AI-guided decisions. Greater transparency is needed: AI developers must be open about device mechanisms, limitations, and clinical validation.9
Although doctors are free to use AI, many choose not to despite understanding its benefits, fearing that errors made with AI tools could expose them to claims of practicing medicine below the standard of care.
Most healthcare AI tools fall through gaps in U.S. Food and Drug Administration (FDA) regulations, as existing frameworks focus on static medical devices rather than adaptive software algorithms.
Therefore, new regulations need to be developed to address medical AI specifically. This would encourage innovation while improving effectiveness and ensuring user safety.
Future considerations to resolve liability issues
Regulatory perspectives on AI tools in medicine vary across countries based on factors such as risk tolerance and the desire for innovation. Continued international collaboration on healthcare AI governance will play a key role in overcoming these hurdles while promoting both innovation and public well-being.10
Clear regulations, accountability mechanisms, and technical standards are urgently needed to support the use of AI in medicine.
Scientists and policymakers believe that ongoing investigations of data bias, transparency and privacy are important to improve the accuracy and use of AI tools in the healthcare sector.
AI systems must provide the reasoning behind a diagnosis, which helps clinicians assess whether key features were considered. Additionally, regulatory bodies need to establish mechanisms to assess the real-world performance of AI systems and detect errors within devices.
References
- Kang J, et al. Artificial intelligence across the field of oncology: current applications and emerging tools. BMJ Oncology. 2024;3:e000134. doi: 10.1136/bmjonc-2023-000134.
- Junaid SB, et al. Recent advancements in emerging technologies for healthcare management systems: a survey. Healthcare (Basel). 2022;10(10):1940. doi: 10.3390/healthcare10101940.
- Kolla L, Parikh RB. Uses and limitations of artificial intelligence for oncology. Cancer. 2024;130(12):2101-2107. doi: 10.1002/cncr.35307.
- Elmore JG, Lee CI. Artificial intelligence in medical imaging: learning from past mistakes in mammography. JAMA Health Forum. 2022;3(2):e215207. doi: 10.1001/jamahealthforum.2021.5207.
- Wang L. Mammography with deep learning for breast cancer detection. Front Oncol. 2024;14:1281922. doi: 10.3389/fonc.2024.1281922.
- Khan B, et al. Drawbacks of artificial intelligence and their potential solutions in the healthcare sector. Biomed Mater Devices. 2023;1-8. doi: 10.1007/s44174-023-00063-2.
- Mennella C, Maniscalco U, De Pietro G, Esposito M. Ethical and regulatory challenges of AI technologies in healthcare: a narrative review. Heliyon. 2024;10(4):e26297. doi: 10.1016/j.heliyon.2024.e26297.
- Centers for Disease Control and Prevention. Health Insurance Portability and Accountability Act of 1996 (HIPAA). 2024. https://www.cdc.gov/phlp/php/resources/health-insurance-portability-and-accountability-of-of-of-ypaa.html
- Fehr J, et al. Trustworthy AI reality check: the lack of transparency of artificial intelligence products in healthcare. Front Digit Health. 2024;6:1267290. doi: 10.3389/fdgth.2024.1267290.
- Morley J, et al. Governing data and artificial intelligence for health care: developing an international understanding. JMIR Form Res. 2022;6(1):e31623. doi: 10.2196/31623.
