Artificial intelligence (AI) seems to be everywhere these days, and healthcare is no exception.
author
-
Karin Verspoor
Dean, School of Computing Technologies, RMIT University
-
David Hansen
CEO, Australian e-Health Research Centre, CSIRO
-
Enrico Coiera
Professor of Medical Informatics, Macquarie University
Computer vision tools can detect suspicious skin lesions as well as a trained dermatologist can. Other tools can predict coronary artery disease from scans, and data-driven robots can guide minimally invasive surgery.
AI can also analyse patients' genomic and molecular data to diagnose disease accurately and guide treatment choices. For example, machine learning has been applied to detect Alzheimer's disease and to select the best antidepressant for patients with major depression.
Deep learning techniques can model electronic medical record data to predict patient health outcomes and provide early estimates of treatment costs.
New language-based generative AI technologies such as ChatGPT are generating a lot of buzz in the clinical world. They could answer patient questions, help doctors take better notes, and even explain a diagnosis to a worried grandchild.
There is no doubt AI will bring benefits across the healthcare system, for patient health, clinical workflows and system efficiency.
However, there are legitimate concerns about the accuracy of such tools. How well does a tool perform in a new setting, such as a different country or hospital from the one where it was developed? Does it "hallucinate", fabricating information?
Development of “medical grade” tools
A recent article in the Medical Journal of Australia argued that effective use of AI in healthcare will require reskilling the workforce, reshaping healthcare services and transforming workflows.
Importantly, we also need to collect evidence that an AI tool is "medical grade" before using it on patients.
Many of the claims made by developers of medical AI lack appropriate scientific rigour, and evaluations of AI tools can be at high risk of bias. This means the tests conducted to establish a tool's accuracy may be too narrow.
AI tools can also fail, or perform poorly, when the context of their use changes. Conversational agents such as chatbots can generate misleading medical information that delays patients seeking care, and can make inappropriate recommendations.
All of this means we need standards for AI tools that influence patient diagnosis and treatment. Clinicians also need training in how to critically evaluate AI applications, so they can judge whether the tools are ready for routine practice.
We should expect results to be reproducible from one context to another under real-world conditions. For example, a tool developed using historical data from a New York hospital should be tested carefully on live patient data in Broome before we trust it.
Randomised controlled trials of AI tools, in which such differences are controlled for, will be the gold standard of evidence for their use.
You can’t just imitate what other countries are doing
We also need to consider carefully how AI tools are incorporated into workflows to support clinical decision-making. The benefits and risks of a tool are shaped by how the human clinician and the tool work together.
There is a view that all Australia needs to do is adopt the best of what is produced internationally, and that we don't need deep sovereign capacity of our own.
Perhaps, on this view, we can rely on others to regulate AI tools, through the European Union's AI Act or the US Food and Drug Administration's processes for evaluating software as a medical device.
Nothing could be further from the truth.
AI requires local customisation to support local practices and to reflect the diversity of populations and healthcare services. We can't simply export a clinical dataset, then re-import a model built on it, without adapting it to the local context and workflow. We also need to monitor AI tools once they are adopted in clinical practice.
Without some degree of algorithmic sovereignty (the ability to generate or modify AI in Australia), the country will be exposed to new risks and the benefits of the technology will be limited.
A roadmap for AI in healthcare in Australia
The Australian Alliance for Artificial Intelligence in Healthcare has created a roadmap for future development.
It identifies gaps in Australia’s capacity to transform AI into effective and safe clinical services and provides guidance on key issues such as workforce, industry capacity, implementation, regulation and cybersecurity.
These recommendations pave the way for an AI-enabled Australian healthcare system that can safely and ethically deliver personalized, patient-centred care.
The plan also envisions a vibrant AI industry sector, with an AI-aware workforce and AI-savvy consumers working together to create local jobs and exports for the world.
AI has the potential to transform healthcare by harnessing computational power to identify subtle patterns in complex data, across biology, medical images, sensor data, patient experience and more.
With care and strategic investment, AI innovation can benefit clinicians and patients alike. Now is the time to act, to ensure Australia is well positioned to benefit from one of the most important industrial revolutions of our time.
Karin Verspoor is funded by the NHMRC and the ARC. She serves on the board of directors of BioGrid Australia. Karin is a co-founder of the Australian Alliance for Artificial Intelligence in Healthcare, along with the other authors of this article.
David Hansen is funded by the NHMRC. David is a board member of the Australasian Institute of Digital Health. David is a co-founder of the Australian Alliance for Artificial Intelligence in Healthcare, along with the other authors of this article.
Enrico Coiera is funded by NHMRC. Enrico is a shareholder and board member of digital health company Evidentli. Enrico is a co-founder of the Australian Alliance for Artificial Intelligence in Healthcare, along with the other authors of this article.
Courtesy of The Conversation. This material from the original organisation/authors may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions and conclusions expressed here are solely those of the authors.
