Experts warn of safety risks as technology outpaces regulations

Primary care is under strain around the world, from workforce shortages to clinician burnout to increased healthcare complexity, all of which have been exacerbated by the COVID-19 pandemic. AI has been touted as a solution with tools that save time by summarizing consultations, automating management, and supporting decision-making.

In the UK, one in five GPs reported using generative AI in clinical practice in 2024. However, this review found that most research into AI in primary care is based on simulations rather than real-world clinical trials, leaving significant gaps in effectiveness, safety, and equity.

The exact number of GPs using generative AI in Australia is not known, but it is estimated to be around 40%.

“AI is already being implemented in our clinics, but without Australian data and proper oversight of the number of GPs using AI, we are flying blind on safety,” Associate Professor Laranjo said.

AI scribes and ambient listening technologies can reduce cognitive load and improve GP job satisfaction, but they also come with risks such as automation bias and loss of important social or biographical details within medical records.

“Our study found that many GPs using AI scribes do not want to go back to typing. They say AI speeds up consultations and allows them to focus on their patients, but these tools can miss important personal information and can introduce bias,” said Associate Professor Laranjo.

For patients, symptom checkers and health apps promise convenience and personalized care, but their accuracy varies widely and many have never been independently evaluated.

“Generative models like ChatGPT sound convincing, but they can get the facts wrong,” said Associate Professor Laranjo. “They often agree with users even when the users are wrong, which is dangerous for patients and difficult for clinicians.”
