Artificial intelligence has enormous potential in medicine, but the underlying new technology also poses major threats to patient confidentiality, data privacy and the peer review process, writes Canberra GP Dr Scott Mills.
As a kid, I was interested in all things science and science fiction.
The future utopias of youthful imagination always seemed just a few decades away, with conscientious robots serving our every need and silvery highways of hovercars winding through the sky.
Well, the robots finally showed up, but not in the form I had envisioned.
Far from the automaton world of tin butlers and workers, we are instead faced with a horizon of machine progress that can think on our behalf.
And in a cynical twist straight out of an Asimov novel, we are destined not to be programmers commanding armies of robots, but headless tools operated by giant artificial brains in the cloud.
For medical professionals, the existential crisis is perhaps more real than we care to admit.
Medicine is a discipline that has always embraced the translation of new technologies and scientific innovations into practice, and machine learning and heuristic tools such as ChatGPT are sure to move into clinical use.
As physicians, we welcome workflow efficiencies, marvel at optimized risk/benefit ratios, and take pride in our profession’s careful commitment to ethical safeguards and human validation.
But all new technology is, by definition, unprecedented.
It has a way of catching society off guard, and the changes it creates rarely turn out as intended.

Synthesis is the new search
Part of what makes ChatGPT and other emerging large language AI models seem innocuous is that they operate as a new form of search aggregation: the information they provide compresses hours of human searching and learning, on the current assumption that it offers an accurate review of the available evidence across hundreds or thousands of sources and millions of data points.
This is true insofar as ChatGPT effectively feeds the “internet” and whatever other material it is given into a machine learning algorithm that can accurately synthesize the best answer.
In other words, if you take every article ever written on a topic to be true, ChatGPT will try to mimic the answer that body of writing would most likely give to questions asked about that topic.
Under best practice conditions, the precision of a technology that draws on a library of rigorous reference material can be high, and healthcare, which operates in a highly regulated, peer-validated information environment, is a natural candidate. Machine learning of evidence-based protocols is already in development.
The exponential synergy between the advancing realm of natural language processing for extracting patient data and rule-based expert systems for interpreting it is an intuitive fit for future medical decision-making.
Obscuring human creativity
But understanding these AI impacts simply as labor and time savings, faster research assistance and accelerated decision-making for clinical questions misses the most important implications.
The real wildcard danger of such technology lies not in its ability to synthesize from known information, but in its ability to manipulate information into forms that mimic new, creative human content.
This is a self-refining technology designed to maximize both cognitive logic and emotional persuasion.
It has the ability to tell stories, and it also has the ability to learn the stories we want to hear.
The future of programs such as ChatGPT lies in continually refining their human-like responses so that the knowledge they convey carries maximum emotional resonance for the user.
And while the current iteration simply closes the loop on directly asked questions and answers, the vast potential scope of generative AI and autonomous AI agency is just beginning to be explored.
Speaking at the Frontiers Forum in Switzerland earlier this year, historian and philosopher Yuval Noah Harari noted that a new threshold has been crossed in allowing AI systems to shape human culture, arguably one of the most prominent points of contention in the AI debate.
Next-generation AI language models could give birth to our next religion, Harari argues. A terrible idea, I know.
The human face of healing
At a fundamental level, beyond the pragmatics of knowledge, decision-making and responsibility, physicians serve as the human face of the healing sciences.
We are public conduits of compassion, entrusted as a profession to acknowledge suffering on behalf of society. We offer comfort, we share sorrow, we extend grace.
We challenge, encourage and reward our patients.
We know our patients as human beings and strive to give them the best possible advice.
Some days it’s hard, some days it’s fun, but the heart of the job is the conversation in the room.
A double threat
In contrast, AI poses two existential risks.
The first is the automation not just of accurate diagnosis, but of subjectively authentic, human-equivalent communication and connection.
For a generation raised to feel and respond through text, the future of primary care could be around-the-clock medical bots that are:
- Convenient, unbiased and convincingly accurate
- Available at all hours, with unwavering focus and effortless patience
- Trained not only in how to manipulate people, but in precisely which buttons to press.
Even those who are skeptical of machine imitation of humans will, I suspect, find that being “good enough at most things and very good at some things” is almost always good enough for their purposes.
The second threat lies not in language, but in the extension of AI to all aspects of human clinician activity: the use of large derived datasets to determine a machine-optimized standard of care.
AI’s potential to maximize cost efficiencies will drive its application in surveillance and expand its role in managing both billing and the clinical activities of staff.
At the time of writing, ChatGPT-based AI is being trialled in hospitals to track performance metrics such as waiting times. We are one step closer to clinical workers being quietly nudged to cognitively offload scheduling tasks to the machine.
Considering that all inputs also become study data, the AI currently being tested can even suggest questions auditors might ask to meet regulatory requirements and make recommendations to improve KPIs.
In a brave new hospital run on AI, efficiency metrics and real-time monitoring, will doctors be reduced to robots performing physical tasks at a pace assigned by algorithms?
It may seem far-fetched, but history shows that technologies deployed with the best intentions do not always produce the best social outcomes.
Dr Mills is a Canberra GP. He holds a master’s degree in public health and had a career in commercial design and marketing before medicine.
The statements and opinions expressed in this article reflect the views of the author and do not necessarily represent official policy of the AMA, the MJA or Insight+ unless so stated.
Subscribe to the free Insight+ weekly newsletter here. It is available to all readers, not just registered medical practitioners.
If you would like to submit an article for consideration, send a Word version to mjainsight-editor@ampco.com.au.
