From digital scribes to ChatGPT, artificial intelligence (AI) is rapidly being introduced into GP clinics. A University of Sydney study warns that the technology is outpacing safety testing, putting patients and the health system at risk.
The study, published in The Lancet Primary Care, synthesized global evidence on how AI is being used in primary care, drawing on data from the US, UK, Australia, several African countries, Latin America, Ireland and other regions. The researchers found that while AI tools such as ChatGPT, AI scribes, and patient-facing apps are increasingly being used for clinical questions, documentation, and patient advice, most are being deployed without thorough evaluation or regulatory oversight.
“Primary care is the backbone of the health system, providing accessible and continuous care,” said Associate Professor Liliana Laranjo, lead researcher and Horizon Fellow at the Westmead Applied Research Centre. “AI can relieve pressure on overstretched services, but without safeguards, we risk unintended consequences for patient safety and quality of care.”
GPs and patients are turning to AI, but evidence lags
Primary care is under strain around the world, from workforce shortages to clinician burnout to increasing healthcare complexity, all exacerbated by the COVID-19 pandemic. AI has been touted as a solution, with tools that save time by summarizing consultations, automating administrative tasks, and supporting decision-making.
In the UK, one in five GPs reported using generative AI in clinical practice in 2024. However, the review found that most research into AI in primary care is based on simulations rather than real-world clinical trials, leaving significant gaps in the evidence on effectiveness, safety, and equity.
The exact proportion of GPs using generative AI in Australia is not known, but it is estimated at around 40%.
“AI is already being implemented in our clinics, but without Australian data and proper oversight of how many GPs are using AI, we are flying blind on safety,” Associate Professor Laranjo said.
AI scribes and ambient listening technologies can reduce cognitive load and improve GP job satisfaction, but they also come with risks such as automation bias and loss of important social or biographical details within medical records.
“Our study found that many GPs using AI scribes do not want to go back to typing. They say AI speeds up consultations and allows them to focus on their patients, but these tools can miss important personal details and introduce bias,” said Associate Professor Laranjo.
For patients, symptom checkers and health apps promise convenience and personalized care, but their accuracy varies widely and many have not been independently evaluated.
“Generative models like ChatGPT can sound convincing even when they are wrong,” said Associate Professor Laranjo. “They often agree with users even when the users are mistaken, which is dangerous for patients and challenging for clinicians.”
AI fairness and environmental risks
Experts warn that while AI promises faster diagnosis and more personalized care, bias can creep in and deepen health disparities. For example, dermatology tools often misdiagnose conditions on darker skin tones, which are typically underrepresented in training datasets.
Conversely, the researchers say that if designed well, AI can help address inequality. One arthritis study used an algorithm trained on a diverse dataset to double the number of Black patients eligible for knee replacement surgery, predicting patient-reported knee pain more accurately than standard physician interpretation of X-rays.
“If we ignore socio-economic factors and universal design, AI in primary care could go from being a breakthrough to a setback,” said Associate Professor Laranjo.
The environmental costs are also significant. Training GPT-3, the 2020 model that preceded ChatGPT, emitted as much carbon as 188 flights between New York and San Francisco. Data centers currently consume around 1% of the world's electricity, and in Ireland they account for more than 20% of national electricity use.
“The environmental impact of AI is an issue,” said Associate Professor Laranjo. “We need a sustainable approach that balances innovation with equity and the health of our planet.”
Researchers are calling on governments, clinicians and technology developers to prioritize:
- Robust evaluation and real-world monitoring of AI tools
- A regulatory framework that keeps pace with innovation
- Educating clinicians and the public to improve AI literacy
- Bias mitigation strategies to ensure equity in health care
- Sustainable practices to reduce the environmental impact of AI
“AI offers an opportunity to rethink primary care, but innovation should not come at the expense of safety or equity,” said Associate Professor Laranjo. “Partnerships across sectors are needed to ensure that AI benefits everyone, not just the tech-savvy or well-resourced.”
