Advances in generalizable medical AI

A patient lies on the operating table as the surgical team hits a dead end. They can’t find the intestinal rupture. The surgeon asks aloud for help, and an artificially intelligent medical assistant gets to work, reviewing the patient’s past scans and highlighting video streams of the procedure in real time. It alerts the team when a procedural step is skipped and reads out relevant medical literature when the surgeons encounter a rare anatomical phenomenon.

Generalist medical AI models can accomplish a wide variety of tasks within and across disciplines without being specifically trained for their assigned tasks. (Image credit: iStock/metamorworks)

Aided by artificial intelligence, physicians across every discipline may one day be able to quickly parse a patient’s entire medical file against the backdrop of all medical data and all published medical literature online. This potential versatility in the clinic is made possible by the latest generation of AI models.

“Historically, medical AI models could only handle very small, narrow pieces of the health care puzzle,” said Jure Leskovec, professor of computer science at Stanford Engineering.

Stanford University researchers and their collaborators describe generalist medical artificial intelligence (GMAI) as a new class of medical AI models that are knowledgeable, flexible, and reusable across many medical applications and data types. Their perspective on this progress appears in the April 12 issue of Nature.

Leskovec and his collaborators describe how GMAI models could interpret varying combinations of data from imaging, electronic health records, lab results, genomics, and medical text, going well beyond the abilities of contemporary models like ChatGPT. These GMAI models would provide spoken explanations, offer recommendations, draw sketches, and annotate images.

“Many of the inefficiencies and errors that happen in medicine today occur because of the hyper-specialization of human doctors and the slow and spotty flow of information,” said co-lead author Michael Moor, an MD and now a postdoctoral fellow at Stanford Engineering. “The potential impact of generalist medical AI models could be profound because they wouldn’t just be an expert in their own narrow specialty, but would have more abilities across specialties.”

Medicine without borders

Most of the more than 500 AI models approved by the FDA for clinical medicine perform only one or two narrow tasks, such as scanning a chest X-ray for signs of pneumonia. Recent advances in foundation model research, however, promise to solve more diverse and challenging problems. “The exciting breakthrough is that generalist medical AI models will be able to ingest different types of medical information (for example, imaging studies, lab results, and genomics data) and then perform tasks that we instruct them to do on the fly,” said Leskovec.

“We expect to see a significant shift in the way medical AI operates,” Moor continued. “Next, we will have devices that, rather than performing a single task, can perform maybe a thousand tasks, some of which were not even anticipated during model development.”

The authors, who also include Oishi Banerjee and Pranav Rajpurkar of Harvard University, Harlan Krumholz of Yale University, Zahra Shakeri Hossein Abad of the University of Toronto, and Eric Topol of the Scripps Research Translational Institute, outline how GMAI could handle a range of applications, from chatbots that interact with patients, to note-taking, to bedside decision support for doctors.

In radiology, the authors propose, GMAI models could draft radiology reports that visually point out abnormalities while taking the patient’s history into account. Radiologists could improve their understanding of a case by chatting with the model.

In their paper, the scientists also describe the additional requirements and capabilities needed to develop GMAI into a trustworthy technology. They point out that such a model would need to consume all of a patient’s personal medical data, plus historical medical knowledge, and reference that data only when interacting with authorized users. It would then need to be able to hold a conversation with a patient, much like a triage nurse or a doctor, to gather new evidence and data or to suggest various treatment plans.

Concerns about future developments

In their paper, the co-authors grapple with the implications of a model capable of 1,000 medical assignments, with the potential to learn even more. “I think the single biggest challenge for generalist models in medicine is going to be validation,” Moor said.

They point to flaws already identified in the ChatGPT language model. Likewise, an AI-generated image of the Pope wearing a designer puffer coat may be amusing. “But when there are high-stakes scenarios and AI systems are making life-or-death decisions, validation becomes incredibly important,” Moor said.

The authors continue that guarding privacy is also a necessity. “With models like ChatGPT and GPT-4, this is a big issue because online communities have already identified ways to jailbreak the current safeguards,” Moor said.

“Untangling data bias and social bias is also a sizable challenge for GMAI,” Leskovec added. GMAI models need to be able to focus on the causal signals for a given disease and ignore spurious signals that merely correlate with the outcome. And, assuming model sizes continue to grow, Moor points to early research showing that larger models tend to exhibit more social bias than smaller ones. “It is the responsibility of the owners and developers of such models, and of vendors, to make sure these biases are identified and addressed early on, especially when deploying them in hospitals,” Moor said.

“The current technology is very promising, but a lot is still missing,” agreed Leskovec. “The question is whether we can identify the currently missing pieces, such as verification of facts, understanding of biases, and explainability and justification of answers, so that we can give the community an agenda for how to make progress toward fully realizing the profound potential of GMAI.”

The paper’s co-lead author, Rajpurkar, is a former PhD student in computer science at Stanford Engineering, and co-first author Banerjee is a former master’s student in computer science at Stanford Engineering. Leskovec is also a member of Stanford Bio-X and the Wu Tsai Neurosciences Institute, and an affiliate of the Institute for Human-Centered Artificial Intelligence.

The study was funded by the National Institutes of Health, the Defense Advanced Research Projects Agency, GSK, the Wu Tsai Neurosciences Institute, the Army Research Office, the National Science Foundation, the Stanford Data Science Initiative, Amazon, DoCoMo, Hitachi, Intel, JPMorgan Chase, Juniper Networks, KDDI, NEC, and Toshiba. In the past three years, Krumholz has received expenses and/or personal fees from UnitedHealth, Element Science, Eyedentifeye, and F-Prime. He is a co-founder of Refactor Health and HugoHealth, and is associated with contracts, through Yale New Haven Hospital, from the Centers for Medicare and Medicaid Services and, through Yale University, from the Food and Drug Administration, Johnson & Johnson, Google, and Pfizer. The other authors declare no competing interests.


