Nearly three in 10 UK GPs use AI tools such as ChatGPT when seeing patients, despite the risk of making mistakes and being sued, a study has found.
While the adoption of AI to reduce workloads is progressing rapidly, a lack of regulation of the technology leaves GPs in a ‘wild west’, unsure which tools are safe to use. That is the conclusion of a study by the Nuffield Trust think tank, based on a Royal College of General Practitioners survey of 2,108 GPs on AI, alongside GP focus groups.
Ministers hope AI will help reduce the delays patients face when seeing their GP.
The study found that a growing number of GPs are using AI to create summaries of patient encounters, to help diagnose conditions and to handle routine administrative tasks.
A total of 598 (28%) of the 2,108 survey respondents said they were already using AI. More male GPs (33%) than female GPs (25%) use it, and uptake is far higher among GPs in wealthier areas than in poorer ones.
Its use is spreading rapidly. However, the majority of GPs, whether they use it or not, are concerned that practices adopting it could face “professional liability and medico-legal issues”, “risk of clinical error” and “patient privacy and data security” problems as a result, the Nuffield Trust report said.
“The government is excited about the potential for AI to transform the NHS, but there is a huge gap between policy ambitions and the current chaotic reality of how AI is being deployed and used in general practice,” said Dr Becks Fisher, a GP and the think tank’s director of research and policy.
“It is very difficult for GPs to use AI with confidence when faced with the large number of tools that are not regulated at a national level in the NHS,” she added.
Some NHS integrated care boards support the use of AI, while others ban it.
To ministers’ dismay, the study also found that GPs were using the time saved to recover from the stresses of their busy working lives, rather than to see more patients. “While policy makers would like this saved time to be used to offer more appointments, GPs reported using this time primarily for self-care and rest, including reducing overtime to prevent burnout,” the report adds.
Another study, published last month in Digital Health, on how UK GPs are using AI reached similar conclusions: it found that the proportion of GPs using AI had risen from 20% to 25% compared with the previous year.
“In just 12 months, generative AI has gone from taboo to tool in British medicine,” said the study’s lead author, Dr Charlotte Blease of Uppsala University in Sweden.
Like the Nuffield Trust, she highlighted the lack of regulation as a key concern, particularly given the speed at which GPs are integrating AI into clinical practice. “The real risk is not that GPs are using AI, but that they are using it without training or supervision,” Blease said.
“AI is already being used in everyday medicine. The challenge going forward is to ensure that it is deployed safely, ethically and openly.”
According to Healthwatch England, patients are also increasingly turning to AI to manage their own healthcare, for example when they are unable to book a GP appointment.
“Our recent research shows that patients continue to trust the NHS with their health information, but around one in 10 (9%) now use AI tools to get information about how to stay healthy,” said Chris McCann, deputy chief executive of the patient watchdog group.
“There are many reasons why people turn to AI tools, such as when they don’t have access to GP services. However, the quality of advice from AI tools is inconsistent. For example, one person received advice from an AI tool that confused shingles with Lyme disease.”
A commission set up by the government in September to examine how to ensure AI is used in a safe, effective and well-regulated manner is due to make recommendations in its forthcoming report.
The Department of Health and Social Care has been contacted for comment.
