Artificial intelligence in healthcare: Where does the responsibility lie?

Applications of AI


There is no getting away from the fact that artificial intelligence (AI) is becoming more commonplace in our daily lives: from queries put to Copilot and ChatGPT, to driverless cars, to AI in the medical field, its use is continually evolving.

As the Government's plans to digitalize the healthcare sector are announced, the reality is that AI is already making great strides, and it is important to consider how its use will impact insurers and those handling claims.

AI is used in a variety of ways in the medical field, from analyzing X-rays, mammograms, and skin samples to acting as a virtual scribe that takes notes during appointments. Clinicians are using AI in primary, secondary, and tertiary settings, for example to assist in analyzing test results and images and to reduce time spent on administrative tasks.

While AI is expected to benefit both patients and healthcare professionals, it is equally important to exercise caution. AI tools are only as reliable as the data fed into them: systems can learn, but they must learn from existing data, and that data needs to be broad enough to represent society as a whole, not just a narrow group of patients. Developers and users of AI systems therefore need to ensure the reliability of input data and maintain it over time to avoid unreliable results. Data entry and cleansing issues can be a source of error, so those using a system should be encouraged to raise concerns about the results the AI produces and to check them rigorously for inaccuracies.

When medical professionals make clinical decisions, it is well established who is responsible when something goes wrong. Where AI is involved, however, responsibility is far less clear, and there are several candidates: it may fall to the clinician using the technology (e.g., when entering data or interpreting the information generated), the healthcare organization that implemented the AI system, the entity that developed the technology, or the entity that approved its use in a healthcare setting. If something goes wrong, various legal frameworks may apply, including negligence, product liability, and vicarious liability. As AI is a fast-developing field, it is unclear at this stage how the courts will approach its use. There will inevitably be claims about the use, or non-use, of AI in patient clinical processes and associated decision-making, so contracts for the use of AI should be carefully considered, including their liability and indemnity clauses.

A question also arises as to how the standard of care will be assessed where AI has been used in clinical practice. Parties to claims typically instruct independent medical experts to assist the court in determining liability, but can the same medical experts comment where AI has been used, or will an AI expert need to be instructed alongside a medical professional to explain how the technology works? Or will medical experts need to be familiar with the use of AI in their own practice in order to produce a report? Consideration should also be given to whether the familiar Bolam test would work in situations where an AI system has made a recommendation with which a reasonable body of clinical opinion does not agree.

It could be argued that not using AI is itself a breach of duty: AI is continually learning from available data, and it could become so advantageous or accurate in clinical practice that some patients might specifically request its use. Where AI is available, is failing to use it a breach of duty, and should the consent process cover the risks associated with its use? These are questions that clinical negligence lawyers and medical law professionals will need to address as the use of AI in healthcare increases.

Alongside the different legal frameworks that potentially apply, consideration should also be given to how AI will be used and whether this will impact liability. For example, where AI is used to arrive at a diagnosis, as with other diagnostic tools, legal responsibility arguably lies with the person making the diagnosis (much as in a standard negligence claim). Some argue that this may depend on how a particular algorithm is used and whether it is the AI or the clinician that arrives at the diagnosis, which raises the question of where responsibility lies if clinicians are removed from the diagnostic process altogether. Where algorithms influence decisions or produce diagnoses, the question becomes whether clinicians can understand and explain how those diagnoses were reached; if they cannot explain it, can they really take responsibility when something goes wrong?

Healthcare providers who use AI in clinical settings need to be aware of how AI is being used and what it is being used for, as this may help determine liability. Policies and operational procedures that set out the role of AI in arriving at a diagnosis may assist, as may training from AI developers to help clinicians understand how the technology works and what it can do. Providers may also want to document in the medical record whether and how AI was used in reaching a diagnosis, as this could prove helpful if something goes wrong and these questions need to be answered.

Despite the growing use of AI in healthcare, it is important that a qualified human remains the one making the final diagnosis and discussing and agreeing the treatment journey with the patient (though AI may assist in that process by providing information about expected treatment outcomes, the risks associated with the options under discussion, and so on). Guidance from the NHS England Transformation Directorate dated 30 April 2025 states, inter alia: “Final decisions about the care people receive should be made based on professional judgment and in consultation with patients or service users.” This position is likely to change as the use of AI becomes more widespread, and accountability for when things go wrong will evolve with it.


