As AI systems become more pervasive, risks and opportunities increase. Digital leaders need to put humans first at every stage of design and implementation, writes Simon Noel
Working in digital for the past 20 years, I have interacted with many different types of clinical decision support system. Most of these operate at a basic level, applying algorithmic rules to discrete data to guide the user in a specific scenario.
I have been involved with systems that include real-time guidance within critical care workflows, guidance to facilitate appropriate blood transfusions, admission rules that trigger targeted assessments and care plans, and VTE (venous thromboembolism) risk alerts. The common thread is that, despite providing real-time clinical support, these systems depend on timely, accurate data and proper coding, not on AI. The rules behind their algorithms are relatively simple. True AI and machine learning are far more complex, and their sphere of influence can become far-reaching as they evolve over time.
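To make the distinction concrete, a rule-based alert of this kind can be sketched in a few lines. The field names and thresholds below are purely illustrative, not those of any real VTE tool, which would use validated risk-assessment criteria:

```python
# A minimal sketch of a rule-based clinical alert. The field names and the
# threshold are hypothetical, chosen only to illustrate the mechanism.

def vte_risk_alert(patient: dict) -> bool:
    """Fire an alert when discrete, coded risk factors cross a fixed rule."""
    risk_factors = sum([
        patient.get("age", 0) >= 60,            # coded demographic data
        patient.get("reduced_mobility", False),
        patient.get("active_cancer", False),
        patient.get("prior_vte", False),
    ])
    # The rule is set at configuration time; it does not learn or adapt,
    # and it is only as good as the coded data it is fed.
    return risk_factors >= 2
```

The point is that everything here is explicit and pre-configured: if the coded inputs are late or wrong, the alert is wrong, and no amount of rule logic compensates for that.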
AI is making suggestions to me as I write this article. It shows up on our phones and in recommendations for things to watch on our streaming services. These systems learn from what we do every day rather than adhering to a pre-configured set of discrete data.
ChatGPT and Google's Bard have given AI a public voice and are useful tools. But I have doubts about the appropriateness of these platforms in frontline medicine, particularly for role-specific guidance, given that the free version of ChatGPT draws on a dataset from early 2022 while medical advances are rapid. We should also not forget last year's high-profile conference at Bletchley Park, which examined the general safety of AI as its use expands and touches every aspect of our lives.
Make sense of your data
AI has great potential to help us understand the complex environments in which we work. However, we also work in a technology landscape where many systems were deployed without AI fully in mind, which can be challenging: there can be many obstacles to accessing and understanding the data at hand.
For example, an application might use AI-based natural language processing to analyse unstructured text in clinical notes, then analyse the extracted data further with AI. What becomes of the results of this complex data processing? We need to ensure that their reliability is not compromised by bias or processing failures. Understanding this will also help shape how we profile future digital system deployments.
We must understand where data sits, and how it affects our activities, at the very start of design and implementation. In parallel, we must recognise that introducing a system has both negative and positive impacts on our working environment, and on us professionally: on our skills, our capabilities and our professional identities.
Accessible when you need it
Clinical systems are not always deployed with data extraction or reporting as the primary aim. Given expanding data availability and increasing system capability, system design that accounts for data use and effective data collection should be the gold standard.
We also need to be aware of issues that affect accessibility and use, such as health inequalities among service users and staff, and how we can prepare our workforce for the future. National guidance such as 'What Good Looks Like' emphasises the importance of supporting staff and service users. But it is vital to get the basics right, so that accessibility in the clinical digital environment is there when you need it; hardware and basic infrastructure should not get in the way.
Informatics and AI cannot be separated, but users need to understand the range of factors that influence healthcare technology and to feel comfortable using it. AI must be part of this recognition of opportunities as well as risks.
This requires a comprehensive, structured approach to system design, build, deployment, user engagement and training. That allows the right data to be collected well while reducing the design and documentation burden. Users must also be given the skills to use and understand digital and data effectively. In short, we need to make it easy to do the right thing, so that technology is not seen as a barrier.
Every data point has a face
Inaccurate data and AI have their greatest impact on patients and staff at the point of care, or at the point where care fails to be provided. This may be felt through direct care at the bedside, through broader organisational processes, or through research affecting subgroups of patients. It must be kept in mind at every stage of medical record design, implementation and use. These data represent our staff and service users, so the systems we develop must be effective, accessible, inclusive and unbiased.
Simon Noel is CNIO at Oxford University Hospitals NHS Foundation Trust. He is also chair of the CNIO Advisory Board for Digital Health.
