Looking ahead to 2026, AI will become more integrated into clinicians' workflows, with advances in areas such as AI-powered documentation and computer vision in radiology. But concerns persist, not only about threats to doctors' jobs, but also about maintaining guardrails, privacy, and security.
Just as EHR implementation has been a multi-year effort to create value, AI scribes are now starting to deliver real benefits, according to Ashish Atreja, MD, president of generative AI (genAI) health platform GenServe.AI and former CIO and chief digital health officer at UC Davis Health.
Atreja says mature AI tools for documentation and computer vision in radiology are poised to deliver value. However, research published in Health Services Research and Managerial Epidemiology suggests that more evidence is needed in unproven areas such as clinical decision-making, and another study, in Missouri Medicine, pointed out that AI decisions can be difficult to interpret.
“For unproven cases, the journey is tougher and longer,” Atreja said.
Of course, concerns about the use of medical AI go back further than 2026. A 2023 Pew Research poll found that 60% of Americans are uncomfortable with clinicians relying on AI for diagnosis and treatment.
But AI is here to stay and is being incorporated into clinical workflows, even as concerns remain. We spoke to experts to find out what clinicians are thinking about when it comes to using medical AI in the new year.
Lack of clinical oversight
Jonathan Kron, CEO of BloodGPT, a platform that uses AI to interpret blood test results, noted that a main concern with AI in healthcare is the lack of proper clinical oversight. Kron said it's essential that clinicians oversee the work of AI because it can be “persuasive” even when it's wrong.
“Having it operate without clinician approval or supervision increases the risk,” Kron said.
The rapid pace of large language model (LLM) development also poses challenges for clinicians seeking to establish oversight over AI, explained Holly Wiberg, Ph.D., assistant professor of operations research and public policy at Carnegie Mellon University's Heinz College of Information Systems and Public Policy.
According to Wiberg, AI tools should be part of a more flexible, continuous monitoring strategy rather than a one-time evaluation, especially given the dynamic nature of genAI tools.
“LLM-based systems undergo frequent model revisions and require continuous monitoring to ensure stability, safety, and regulatory compliance,” Wiberg said.
She recommended that health systems automate regular monitoring of model performance.
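As a rough illustration of what that automated monitoring might look like, consider the minimal Python sketch below; the metric name, review window, and 0.90 threshold are assumptions for illustration, not figures from Wiberg.

```python
# Minimal sketch of automated model-performance monitoring, run on a
# schedule (e.g., nightly) rather than as a one-time evaluation.
# The metric name and threshold are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean

@dataclass
class MonitoringResult:
    metric: str
    value: float
    threshold: float
    alert: bool

def check_performance(recent_scores: list[float],
                      metric: str = "note_accuracy",
                      threshold: float = 0.90) -> MonitoringResult:
    """Compare a rolling window of clinician-reviewed scores to a floor."""
    value = mean(recent_scores)
    return MonitoringResult(metric, value, threshold, alert=value < threshold)

result = check_performance([0.95, 0.91, 0.87, 0.84])
if result.alert:
    print(f"ALERT: {result.metric} fell to {result.value:.2f} "
          f"(floor {result.threshold})")
```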
Atreja also raised concerns about the lack of structure to monitor AI algorithms. To create this structure, he recommended assessing whether a model is private, secure, ethical, and compliant.
Additionally, he advised determining whether the model is generalizable to the population. Generalizable AI models work accurately and reliably across different groups and settings.
“But after implementation, the model fluctuates based on the population and based on how the model evolves,” Atreja explained. “Therefore, continuous monitoring is required, and most organizations do not have a way to continuously monitor these models.”
By continuously monitoring AI algorithms, he suggested, clinicians can guard against safety risks that arise from a decline in model accuracy as the population changes.
“That could potentially lead to wrong decisions,” Atreja said.
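One common way to operationalize that kind of check is the population stability index (PSI), which compares the distribution of a model input at validation time against the population the model sees today. The sketch below is illustrative: the synthetic age data and the conventional 0.2 alert cutoff are assumptions, not anything Atreja prescribed.

```python
# Illustrative population-drift check using the population stability
# index (PSI). The data and the 0.2 cutoff are assumptions for this sketch.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Measure how far today's input distribution has shifted from baseline."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)  # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_ages = rng.normal(52, 12, 5000)  # population at validation time
current_ages = rng.normal(61, 14, 5000)   # population the model sees today
if psi(baseline_ages, current_ages) > 0.2:  # >0.2 is a common drift flag
    print("Population drift detected: revalidate the model.")
```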
He also highlighted that the lack of relevant data can reduce the accuracy of AI scribes.
“Even with AI scribes, clinicians find that many times they have to correct what has been written,” Atreja said. “If they just ignore it and don't correct it, incorrect information can carry over into their notes.”
Additionally, patients are adopting AI tools faster than health systems are, and that gap in access to vetted information poses challenges. Wiberg noted that patients increasingly rely on tools like ChatGPT to research symptoms and seek medical advice.
“Relatively slow adoption by health systems, in contrast to rapid patient adoption, has created a gap in how patients seek health information,” she said. “Health systems need to grapple with how to balance well-placed organizational vigilance with the opportunity to provide vetted information to meet patient needs.”
Data rights and transparency
Kron said blindly incorporating data into AI tools can lead to issues around data rights and a lack of transparency.
He advised technology vendors to use appropriately anonymized and aggregated data to ensure AI systems are secure and fair.
He also said companies should not use personally identifiable protected health information (PHI) in AI tools unless they have “explicit consent” from patients and the purpose is clear.
Additionally, Kron said organizations should not keep PHI in production environments and that “managed anonymized learning” data should provide insight and value to the original data owner. He recommended clear rules, short-term retention, and auditing.
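As a deliberately simple sketch of what stripping obvious identifiers before text reaches an AI tool might look like, consider the snippet below. The regex patterns are illustrative only; real de-identification requires validated tooling and, per Kron, explicit consent wherever PHI is actually used.

```python
# Toy PHI redaction before text is sent to an AI tool. These patterns are
# illustrative, not a substitute for validated de-identification tooling.
import re

REDACTIONS = {
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",          # US Social Security numbers
    r"\b\d{2}/\d{2}/\d{4}\b": "[DATE]",         # dates like 04/17/1958
    r"\b[\w.+-]+@[\w-]+\.[\w.]+\b": "[EMAIL]",  # email addresses
    r"\bMRN[:#]?\s*\d+\b": "[MRN]",             # medical record numbers
}

def redact(text: str) -> str:
    for pattern, token in REDACTIONS.items():
        text = re.sub(pattern, token, text)
    return text

note = "Pt DOB 04/17/1958, MRN: 88412, contact jane.d@example.com."
print(redact(note))  # -> "Pt DOB [DATE], [MRN], contact [EMAIL]."
```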
Kron also recommended “nutrition labels” to maintain transparency in how AI tools are implemented.
“If a vendor cannot provide a simple 'nutrition label' that shows what data they are using, what PHI is (and should not be) retained, how anonymization is done, when the model was last modified, known failure modes, and basic asset checks, we should treat them as not ready for care,” Kron said. “If there is no label, there is no deployment.”
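Such a label could even be machine-readable. The sketch below mirrors the fields Kron lists; the vendor name and every value are hypothetical.

```python
# Kron's "nutrition label" idea as a machine-readable record. All values
# here are hypothetical; the fields follow his list.
from dataclasses import dataclass, field

@dataclass
class AINutritionLabel:
    vendor: str
    training_data_sources: list[str]      # what data the model uses
    phi_retained: str                     # what PHI is (and is not) kept
    anonymization_method: str             # how de-identification is done
    last_model_update: str                # when the model was last modified
    known_failure_modes: list[str] = field(default_factory=list)

label = AINutritionLabel(
    vendor="ExampleScribe Inc.",          # hypothetical vendor
    training_data_sources=["licensed clinical corpora", "synthetic notes"],
    phi_retained="none beyond the active session",
    anonymization_method="safe-harbor de-identification before storage",
    last_model_update="2025-11-01",
    known_failure_modes=["misheard drug names", "speaker attribution errors"],
)

# "If there is no label, there is no deployment."
if not label.anonymization_method or not label.known_failure_modes:
    raise ValueError("Incomplete label: treat the vendor as not ready for care.")
```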
Too many AI models in healthcare
Atreja said the deluge of AI models, agents and solutions in healthcare is creating “a plethora of problems.”
“It's creating a very fragmented market, and that's creating decision paralysis,” he said. “People don't know which ones to double down on.”
Atreja said companies like AWS, Google, Microsoft and OpenAI are offering “universal solutions” that are not “fit for purpose” for healthcare workflows or EHR records and data.
“There's a lot of work to be done to actually create value from it, from what's being built and enabled with AI in general to what we need in a trusted way within the healthcare system using our workflows,” Atreja said.
“As you can imagine, you have dozens to hundreds of AI algorithms running within your organization, and you don't need to log into and monitor each of those applications individually,” he continued. “You need a centralized system to do that, and that's a big gap that's not being filled.”
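A minimal sketch of that kind of centralized layer, with hypothetical model names and owners, might look like this:

```python
# Sketch of a central registry that surfaces the status of every deployed
# AI algorithm in one place. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class ModelStatus:
    name: str
    owner: str          # accountable clinical or IT owner
    last_checked: str
    healthy: bool

class ModelRegistry:
    def __init__(self) -> None:
        self._models: dict[str, ModelStatus] = {}

    def register(self, status: ModelStatus) -> None:
        self._models[status.name] = status

    def unhealthy(self) -> list[ModelStatus]:
        """One view across dozens or hundreds of running algorithms."""
        return [m for m in self._models.values() if not m.healthy]

registry = ModelRegistry()
registry.register(ModelStatus("sepsis-risk", "ICU informatics", "2026-01-05", True))
registry.register(ModelStatus("ai-scribe", "ambulatory IT", "2026-01-05", False))
for model in registry.unhealthy():
    print(f"Needs review: {model.name} (owner: {model.owner})")
```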
Fitting into clinical workflows
Kron said the proliferation of AI assistants can lead to fragmented workflows, and the healthcare industry is concerned about AI tools that don't fit seamlessly into daily operations.
“Clinicians don't want to jump back and forth between multiple apps,” Kron said. “They want fewer steps before they can move on.”
To solve this problem, Kron recommended choosing AI tools that fit within a health system's workflow and enable actions such as ordering, scheduling, and coverage checks with clinician oversight.
“When AI is outside of the EHR/lab/payer rails, there are more clicks and errors,” Kron said.
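One way to picture the clinician-in-the-loop gating Kron describes is a hard block on any AI-proposed action that lacks a clinician's sign-off. The action types and approval flow below are illustrative assumptions, not a description of any vendor's system.

```python
# Sketch of clinician-in-the-loop gating: the AI can draft orders or
# scheduling actions, but nothing executes without explicit sign-off.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProposedAction:
    kind: str            # e.g., "order", "schedule", "coverage_check"
    detail: str
    approved_by: Optional[str] = None

def execute(action: ProposedAction) -> str:
    if action.approved_by is None:
        return f"BLOCKED: {action.kind} awaiting clinician approval"
    return f"EXECUTED: {action.kind} ({action.detail}), signed by {action.approved_by}"

draft = ProposedAction("order", "CBC with differential")
print(execute(draft))              # blocked until a clinician signs
draft.approved_by = "Dr. Example"  # hypothetical approver
print(execute(draft))
```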
Additionally, Wiberg explained that patient visits generate live audio and complex, unstructured notes, which complicates the integration of tools like AI scribes into clinical workflows.
Completed notes from patient visits often consist of “free text, unstructured output,” she added.
According to Wiberg, this unstructured data is fed into AI models, and because the raw audio and free-flowing text involve many stakeholders, developing evaluation metrics within complex workflows can be difficult.
“Model outputs rarely operate in isolation,” Wiberg said. “Instead, they are used across a variety of downstream tasks, from claim preparation and medication reconciliation to serving as a reference for future visits by originating providers and referred specialists.”
Structured evaluation becomes difficult as the outputs of AI models are reused and repurposed across a health system's complex workflows.
“We're interested not only in how the model performs on its own, but also in tracking its downstream effects,” Wiberg said.
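Tracking those downstream effects starts with simple lineage records that note where each output goes. The sketch below, with hypothetical consumer names, shows the idea.

```python
# Toy lineage tracking: log every downstream consumer of a model output so
# that an inaccurate note can be traced and re-reviewed later.
from collections import defaultdict

lineage: dict[str, list[str]] = defaultdict(list)

def record_use(output_id: str, consumer: str) -> None:
    """Record that a downstream task consumed a given model output."""
    lineage[output_id].append(consumer)

record_use("note-20260105-001", "claims_preparation")
record_use("note-20260105-001", "medication_reconciliation")
record_use("note-20260105-001", "referral_to_cardiology")

# If this note later proves inaccurate, we know exactly what to re-review.
print(lineage["note-20260105-001"])
```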
As health systems integrate AI tools into their workflows, they must also decide whether to develop them in-house or use products from outside vendors, according to Wiberg. Health systems that choose to develop AI tools in-house must understand that they will require significant investments in staff time, training, and monitoring, as well as the financial costs of training models.
“Fine-tuning a model can be quite expensive,” she said. “These high resource needs make obtaining third-party tools very attractive as an alternative.”
“However, third-party tools introduce new security and oversight vulnerabilities,” she continued. “As custodians of patient data, health systems must ensure that vendors use the data responsibly, including that the data is not used for training commercial LLMs.”
Prepare for safe and effective healthcare AI in 2026
Kron said health systems should create an “approved use/vendor” list to increase trust in AI apps whose models and plug-ins change rapidly. This list would be part of official AI governance documentation that outlines which use cases are allowed and where AI tools can run.
“Keep it short and update it regularly,” Kron suggested.
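Such a list could live as a small, versioned config that a governance group reviews on a schedule; every entry below is hypothetical.

```python
# Sketch of an approved use/vendor list as a short, regularly updated
# config. Vendors, settings, and dates are hypothetical examples.
APPROVED_AI_USES = {
    "ambient-scribe": {
        "vendor": "ExampleScribe Inc.",
        "allowed_settings": ["ambulatory clinics"],
        "last_reviewed": "2026-01-02",
    },
    "lab-result-summarizer": {
        "vendor": "ExampleLabs AI",
        "allowed_settings": ["primary care"],
        "last_reviewed": "2025-12-15",
    },
}

def is_allowed(use_case: str, setting: str) -> bool:
    entry = APPROVED_AI_USES.get(use_case)
    return bool(entry) and setting in entry["allowed_settings"]

print(is_allowed("ambient-scribe", "ambulatory clinics"))    # True
print(is_allowed("ambient-scribe", "emergency department"))  # False
```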
To ensure the safe use of AI in medicine, he also recommended keeping automation under continuous clinician oversight.
“This means no records are changed or treatment is initiated without the clinician's approval,” Kron said.
In addition, organizations need to train not only clinical faculty on how to use AI responsibly, but also executives on the use cases where medical AI can have the most impact, Atreja said.
Atreja recommended a program called AIM-AHEAD All of Us to train researchers in AI and machine learning (ML). Meanwhile, the American Board of Artificial Intelligence in Medicine (ABAIM) offers medical AI education to clinicians pursuing board certification.
AI programs may be becoming more efficient, but to ensure safe and effective use of AI in 2026, health systems should not let their guard down when it comes to trust and oversight.
“Intuitiveness should not be confused with polish or precision,” Atreja said. “So we have to maintain vigilance in terms of accuracy and reliability and supervise these tools to make sure they don't make mistakes.”
“I think they can add value as long as they are used carefully and with oversight,” he added.
Brian T. Horowitz began covering health IT news in 2010 and has covered the technology beat since 1996.
