10 Best Practices for Implementing AI in Healthcare

Healthcare's interest in artificial intelligence shows no sign of slowing down. Major industry events generate a steady flow of AI product announcements and partnerships, particularly for common use cases such as clinical documentation, process automation, and data aggregation.

The majority of healthcare systems use AI, according to the 2024 AI Adoption and Healthcare Report, released late last year by the Healthcare Information and Management Systems Society (HIMSS) in collaboration with Medscape. The HIMSS survey of IT and healthcare professionals found that 86% of hospitals and health systems used AI, and that 43% had used the technology for at least a year. This is a significant increase from a 2022 American Hospital Association survey, in which only 19% of hospitals reported using AI.

Amid the excitement about the technology's future, there are concerns about implementing AI in healthcare. “There are high bars to meet beyond basic business process management needs,” says Dr. Michael E. Matheny, an internal medicine physician and professor at Vanderbilt University. “It is important for us all to consider using AI in a carefully measured way that respects the need to support our patients and our communities.”

Healthcare organizations may not know what AI implementation should look like, Dr. Saurabh Bhatnagar, a professor at Harvard Medical School, noted in a Harvard Medicine trends article. They might assume it is simply a matter of buying ready-made software, he pointed out, or they could inadvertently concentrate on pilots with significant upfront costs.

To get over the hump, hospitals and health systems need to think carefully about which AI systems they want to implement, how to validate and monitor AI models, and who “owns” each AI use case. Although every organization's needs vary, these 10 steps can guide AI implementation in healthcare.

1. Map AI governance

The ideal AI governance structure brings together expertise from IT, data science, the C-suite, and bioethics. The governance team should seek proposals from business owners, from nursing leaders to specialty chairs and operational managers. Proposals can come in many forms, said Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI): some suggest a specific AI tool to solve a known problem, while others identify an issue and ask whether AI can solve it.

2. Define goals and set expectations

The governance team's initial goal when evaluating proposals is to ensure that AI is the appropriate solution; in some cases, rule-based logic may be sufficient. If AI is the answer, Anderson says, the next step is to determine which type of AI fits best: generative AI for content creation, deep neural networks for pattern recognition, or traditional AI for data analysis. This is also the time to address the age-old build-versus-buy question. Health systems with in-house data science expertise may be able to fine-tune existing foundation models, Anderson said, but organizations will likely need to purchase AI technology if they lack that expertise or if the solution to the problem is complex.

3. Go to market

For health systems considering a purchase, Anderson recommended a formal, open request for proposals. He pointed to the CHAI registry of AI models as a starting point for evaluating vendors and making informed decisions. Business units should be part of vendor evaluation, he added, with special attention paid to how well AI products align with each unit's unique workflows. Procurement is also a critical time to build relationships with vendors, Anderson said, to facilitate governance and performance monitoring down the line.

Follow these 10 best practices when deploying AI in hospitals and health organizations.

4. Ensure data privacy

Matheny said there is tension around using private patient and business data in AI models: algorithms require sufficient training data, but anonymizing that data reduces its utility. And although existing foundation models are widely available, using them can mean sharing data with their commercial owners. To address this, Anderson recommended deploying foundation models locally so that organizations can fine-tune them with their own datasets behind the firewall.
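Before patient data is used to fine-tune a locally deployed model, direct identifiers can be stripped out. The sketch below is a minimal, hypothetical illustration in Python; the field names and the identifier list are invented for the example and cover only a small subset of what real de-identification (e.g., under HIPAA) requires.

```python
# Hypothetical sketch: remove direct identifiers from patient records
# before they are used for local model fine-tuning. Field names are
# illustrative, not from any specific EHR schema, and this subset is
# far smaller than a real de-identification policy would require.

DIRECT_IDENTIFIERS = {"name", "mrn", "ssn", "phone", "email", "address"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers dropped."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {
    "name": "Jane Doe",
    "mrn": "12345",
    "age": 58,
    "diagnosis": "type 2 diabetes",
}
clean = deidentify(record)
# clean retains only the non-identifying clinical fields.
```

In practice, a real pipeline would also handle quasi-identifiers (dates, rare conditions, geography) and free text, which cannot be addressed by simple field filtering.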

5. Explore use cases with easy wins

According to the HIMSS report, the most common AI use cases in healthcare support administrative tasks such as note and meeting transcription and electronic health record (EHR) data entry, along with medical literature reviews and analysis of imaging studies. In the Harvard Medicine trends article, Bhatnagar noted that AI tools may be particularly useful for organizations moving toward value-based care models. With so many variables to monitor, AI can automatically aggregate data to aid analysis, freeing health system leaders to spend more time improving outcomes.

6. Seek stakeholder feedback

AI governance groups and business owners should meet regularly to discuss how AI tools are being used, Matheny said. Feedback on performance, technical accuracy, and clinical efficacy is important for determining the value of an AI project. This continuous review should be part of the organization's culture and structure, Matheny added: “The need to assess will not decrease, especially as the breadth and depth of AI increases.”

7. Follow ethical standards

Summarizing ethical guidelines for AI from the U.S. and Europe, a Harvard Business Review analysis suggested several principles for implementing trustworthy AI in healthcare. Simply put, AI systems must be safe, algorithms must be unbiased and consider the needs of vulnerable subpopulations, and people must be notified and given the option to opt out when AI is in use. But bias can be tricky, Anderson noted. Some models are built with “legitimate bias”; for example, because research has shown that Black women are more likely to be diagnosed with a more aggressive form of breast cancer, a breast cancer risk model may intentionally account for that difference. “The performance of these models should be monitored continuously for intended use and for unfair bias in other populations,” Anderson advised.
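One concrete way to watch for unfair bias is to track a model's accuracy separately for each subpopulation and flag large gaps for human review. The sketch below is a minimal, hypothetical illustration; the groups, data, and the 0.2 gap threshold are all invented for the example, not drawn from any guideline.

```python
# Hypothetical sketch: compare model accuracy across demographic
# subgroups and flag a large gap for review. Data and the threshold
# are invented for illustration.
from collections import defaultdict

def accuracy_by_group(rows):
    """rows: iterable of (group, prediction, actual) -> {group: accuracy}."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, actual in rows:
        totals[group] += 1
        if pred == actual:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

rows = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1),
]
scores = accuracy_by_group(rows)                    # per-group accuracy
gap = max(scores.values()) - min(scores.values())   # worst disparity
flagged = gap > 0.2  # escalate to the governance team if the gap is large
```

Real bias audits use richer metrics (false-negative rates, calibration by subgroup), but the pattern is the same: disaggregate performance rather than relying on a single overall number.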

8. Validate and monitor the model

Validation of AI models and the tools that use them should include feedback from both clinical and technical stakeholders. Organizations need to ensure that products are implemented properly, that models are ingesting the right data, and that model outputs meet expectations, Matheny said. He added that if in-house expertise is lacking, external validation of a model may be required. At the same time, a model that performs well at one institution may not transfer successfully to another institution's environment, so even externally validated models warrant internal testing.
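Ongoing monitoring often starts with a simple data-drift check: compare the distribution of a model input in production against its validation baseline and alert when it shifts. The sketch below is a minimal, hypothetical illustration using only the Python standard library; the readings and the z-score threshold are invented for the example.

```python
# Hypothetical sketch: flag drift when the live mean of a model input
# moves far from its validation baseline, measured in baseline standard
# deviations. Values and threshold are invented for illustration.
from statistics import mean, stdev

def drifted(baseline, live, z_threshold=3.0):
    """Return True if the live mean is more than z_threshold baseline
    standard deviations away from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(live) != mu
    return abs(mean(live) - mu) / sigma > z_threshold

baseline = [98.1, 98.4, 98.6, 98.7, 99.0, 98.5]  # validation-era readings
live_ok  = [98.3, 98.6, 98.8]                    # similar distribution
live_bad = [102.5, 103.0, 102.8]                 # clearly shifted
```

Production monitoring would track many inputs and outputs over rolling windows, but even a check this simple can catch a broken data feed before it silently degrades model output.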

9. Provide training and support

Anderson suggested that users ask pointed questions before relying on an AI tool: for clinical users, those questions should center on the patient, while for administrative and operational users, they should focus on the process. Organizations should encourage users to think critically and keep asking questions, Anderson said. Transparency matters here: users need access to information about the model and how it was trained.

10. Understand the limitations of AI tools

When medical professionals encounter mature, rigorously validated technology, they are unlikely to question its output; Matheny pointed to blood test results with well-established, accepted benchmarks. The same cannot be said of many AI tools, so end users need to be reminded that AI is prone to mistakes. “It might be wrong at times, and you might need to disable it,” Matheny said. Organizations also need to consider the consequences of errors: an AI system that books two appointments at 11:30 is very different from one that takes a patient off a ventilator too early.

AI applications are concrete and broad in healthcare.

Don't repeat history

According to HIMSS, implementing AI in healthcare has increased productivity and saved staff time, but it has not yet reduced costs. The cost issue stems partly from the early stage of AI technology: initial investment in AI infrastructure can be expensive, and it can take time to calculate and realize ROI.

As adoption accelerates, organizations would be wise to avoid the mistakes made in implementing EHRs, Drs. Christian Rose and Jonathan H. Chen of Stanford University wrote in the scientific journal Nature. The negative effects of poor usability left many clinicians wondering whether EHR systems were worth the trouble; avoiding that fate for AI systems, Rose and Chen argued, depends on the effective integration of AI into medicine.

To that end, organizations should emphasize governance, take a practical approach to AI implementation that routinely seeks stakeholder feedback, and continuously monitor AI tools for performance and accuracy.

Brian Eastwood is a freelance writer in the Boston area who has covered healthcare for over 15 years. He also has experience as a research analyst and content strategist.
