Written by Joshua Roberts
It’s no secret that AI is enormously popular right now. According to Google Ngram, usage of the term has climbed rapidly since around 2016, and AI has become shorthand for technological and business innovation. Medicine is no exception. Eric Topol claims that machine learning will solve healthcare’s biggest problems, making care more efficient, more economical, and even more compassionate. Ray Kurzweil, best known for his predictions about AI, forecasts that machines will surpass human intelligence within our lifetime. This outlook is called “technological solutionism”: the idea that every problem in organisations and society can be solved by developing new technology.
Naturally, this enthusiasm has led universities to explore the use of AI in every field, and Durham is no exception, producing some of the most acclaimed research in AI and machine learning. One such study describes an early warning system for patients presenting to A&E departments. Matthew Watson and colleagues compared their AI-powered algorithm to the paper-based algorithms currently used in the NHS and showed that it performs better at identifying patients who require urgent clinical intervention. The paper demonstrates the technical advantages of new machine learning algorithms over the standard algorithms used in hospitals.
However, the paper says nothing about the social implications of deploying this algorithm: how AI would reshape the organisation of A&E departments, or the way nurses and doctors treat and interact with patients. That is clearly not the paper’s purpose, but it is a cause for concern, especially since the paper has appeared in a policy briefing to the Department of Health and Human Services. Social research has so far paid little attention to AI’s impact, not least because its use has only recently become widespread.
Rather than reacting to every alarm you hear, [nurses] learned what the different alarms meant… but the algorithm wasn’t adapting to the environment.
An incredibly interesting body of work is emerging from the Netherlands, notably from Chiara Carboni, who has published important research on how the use of AI and machine learning reshapes the social organisation of healthcare. Her ethnographic research in an intensive care unit revealed how nurses became ‘attuned’ to their surroundings: rather than reacting to every alarm they heard, they learned what the different alarms meant and what the appropriate response to each was. They knew the best way to care for patients precisely because they were attuned to the audible alarms. The hospital where they worked, however, was looking to implement an AI-powered algorithm designed to increase efficiency and help nurses respond to patients quickly. But the algorithm did not adapt to the environment: every alarm it raised demanded immediate attention. Unlike the nurses, it was alert to everything, yet it ignored their experience and dictated their every move. Instead of making the intensive care unit more efficient, it would have made it grossly inefficient, forcing nurses to respond to everything immediately when there was neither the need nor the possibility of doing so.
So what does this mean for the Durham University paper? The problem is that the algorithm was never compared to how nurses and doctors actually triage patients; it was compared only to the paper-based algorithm they nominally use. The premise is that nurses follow the current system to the letter, assigning urgent care purely on the basis of the algorithm. But as we have seen, this is not how nurses actually work. It is safe to assume that nurses in A&E departments become familiar with the patients they see and learn to spot urgent cases and triage them accordingly. Comparing one algorithm to another is meaningless in healthcare, because healthcare is not an algorithm.
The danger of AI technological solutionism is that it forcefully disrupts the social fabric of medicine and other fields. AI is not a one-size-fits-all cure for inefficiency. Before you can try to improve how things work, you must first truly understand how they actually work. AI may well be helpful, but it will not solve everything. Society is not a collection of people making purely rational, algorithmic decisions, yet that is precisely what AI assumes. AI and humans are not interchangeable, and human decision-making cannot simply be replaced by machine learning.
Image: Sahil Singh via Pexels

