Medical AI is moving faster than safety inspections


Flinders University experts have warned that artificial intelligence (AI) needs to be carefully evaluated and managed before it is widely adopted in healthcare, saying rapid progress does not automatically lead to safe use for patients.

In an expert commentary titled "AI can reason like a doctor. What comes next?", published in Science, Flinders researchers warn that while the new AI system has shown impressive capabilities, strong results in controlled studies do not mean it is ready for routine use in hospitals and clinics.

The authors say there is an urgent need to understand how emerging AI tools can be safely incorporated into daily clinical practice while keeping patient outcomes front and center.

Despite these caveats, researchers acknowledge that recent advances in AI are creating real opportunities to support physicians, especially in busy and high-pressure medical settings.

This commentary reviews new research showing that advanced reasoning-based AI systems can step through diagnostic scenarios and, in some cases, nearly match or even exceed the diagnostic performance of experienced physicians.

Eric Cornelis, PhD Candidate

Eric Cornelis, a PhD candidate at Flinders University and co-author of the commentary, said the advance marks a shift from simple question-answering tools to algorithms capable of seemingly human-like clinical reasoning in text-based tasks.

But Flinders’ team emphasizes that real-world medicine involves more than text-based reasoning and test performance.

They say clinical practice relies on physical exams, listening to patients, understanding their medical and social context, and taking ownership of outcomes, elements that current AI systems alone cannot safely provide.

“Medical decision-making is complex, high-stakes, and very human, and accuracy alone, especially in text-based cases, cannot make a system safe for patients,” said Cornelis, whose research spans medicine and public health.

Associate Professor Ashley Hopkins

Lead author Associate Professor Ash Hopkins, an NHMRC investigator and leader of Flinders’ Clinical Cancer Epidemiology Laboratory, said modern medicine relies on judgment, accountability and ethical oversight.

“We have demonstrated that AI systems can reason about clinical problems with performance similar to that of physicians, particularly in the same scenarios used to train clinicians themselves. This provides a real opportunity to support clinicians in the future,” says Associate Professor Hopkins.

“Multiple stakeholders are currently working on AI frameworks in terms of legal, professional, or moral responsibility for AI decisions, and intentional and controlled integration into clinical care is now critically needed.”

This commentary highlights the known risks associated with poorly evaluated systems, including bias, inequitable care, and unintended patient harm.

“History has shown that when algorithms are introduced without adequate safeguards, they can lead to worse outcomes and can amplify problems just as easily as they solve them, especially if the system is trained on incomplete or unrepresentative data,” Cornelis says.

Looking ahead, Flinders researchers argue that enthusiasm for medical AI requires strong governance and clearer evaluation criteria.

“Physicians are not allowed to practice without supervision or evaluation, and AI should be held to the same standards,” Cornelis said.

Researchers emphasize that the true measure of success must be improved outcomes for real patients, not test scores, benchmarks, or demonstrations.

Associate Professor Hopkins says AI has great potential, but it must be applied responsibly.

“Patients need technology that improves their care in the real world, not systems that just look impressive in research,” he says.

“With careful design, strong monitoring, and rigorous evaluation, AI has the potential to become a powerful tool for delivering safer, more equitable, and more effective care across health systems around the world,” concludes Associate Professor Hopkins.

The paper, "AI can reason like a doctor. What comes next?" by Ashley M. Hopkins and Eric Cornelis, is published in Science. DOI: 10.1126/science.aeg8766

Acknowledgment: AMH holds an Emerging Leader Researcher Fellowship from the Australian National Health and Medical Research Council (APP2008119) and is grateful for support from the Tour de Cure, Australia and the Flinders Foundation. EC acknowledges support from the Australian National Health and Medical Research Council (APP2008119).
