Four state-of-the-art image search engines proposed to automate the search and retrieval of digital histopathology slides have insufficient performance for routine clinical care, a new study suggests.
Artificial intelligence algorithms powering histopathology image databases performed worse than expected, with some achieving less than 50% accuracy, making them unsuitable for clinical practice, said Dr. Helen Shan, a hematology-oncology fellow at the David Geffen School of Medicine at the University of California, Los Angeles.
“Many AI algorithms are currently being developed for medical practice, but little effort has been directed toward rigorous external validation,” said Shan, who co-led the study with Dr. Mohammad Sadegh Nasr of the University of Texas at Arlington. “The field also has yet to standardize how best to test AI algorithms before clinical implementation.”
The paper will be published in the peer-reviewed journal NEJM AI.
Currently, pathologists manually search and retrieve pathological images, which is extremely time-consuming. As a result, there is increasing interest in developing automatic search and retrieval systems for digitized cancer images.
The researchers designed a series of experiments to evaluate the accuracy of search engine results on tissue-type and subtype search tasks, using real-world UCLA cases and a larger de-identified dataset. The four engines investigated were Yottixel, SISH, RetCCL, and HSHR, each of which takes a different approach to indexing, database generation, ranking, and retrieval of images.
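Slide search engines like these are typically scored by whether the slides they retrieve share the query slide's tissue type or subtype. A common metric in this literature is majority-vote accuracy over the top-k retrieved slides; the sketch below is a minimal, hypothetical illustration of that metric, not the study's actual evaluation code, and all labels and data in it are invented for the example.

```python
from collections import Counter

def top_k_majority_accuracy(query_labels, retrieved_labels, k=5):
    """Fraction of queries whose top-k retrieved slides match the
    query's label by majority vote (a metric often used to score
    slide search-and-retrieval engines)."""
    correct = 0
    for query, retrieved in zip(query_labels, retrieved_labels):
        top_k = retrieved[:k]
        majority_label, _ = Counter(top_k).most_common(1)[0]
        correct += (majority_label == query)
    return correct / len(query_labels)

# Toy example with hypothetical tissue-type labels.
queries = ["breast", "brain", "lung"]
results = [
    ["breast", "breast", "lung", "breast", "breast"],  # majority: breast (hit)
    ["brain", "lung", "lung", "brain", "lung"],        # majority: lung (miss)
    ["lung", "lung", "lung", "breast", "brain"],       # majority: lung (hit)
]
print(top_k_majority_accuracy(queries, results, k=5))  # → 0.666...
```

Under this kind of metric, an engine can look strong on one tissue type and weak on another, which is exactly the per-tissue inconsistency the study reports.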
Overall, the researchers found inconsistent results across the four algorithms: Yottixel performed best on breast tissue, for example, while RetCCL performed best on brain tissue. A panel of pathologists also rated the search engines' results as low to average in quality, with some visible errors.
The researchers are now developing new guidelines to standardize clinical validation of AI tools, Xiang said, as well as new algorithms that leverage different types of data to produce more reliable and accurate predictions.
“Our study shows that despite the amazing advances in artificial intelligence over the past decade, significant improvements are still needed before it can be widely adopted in medicine,” Professor Xiang said. “These improvements are essential to maximizing the benefits of artificial intelligence to society while avoiding harm to patients.”
Additional authors of the study are Dr. Chace Moleta and Dr. Jitin Makker of UCLA, and Jai Prakash Veerla, Jillur Rahman Saurav, Amir Hajighasemi, Parisa Boodaghi Malidarreh, Manfred Huber, and Jacob Luber of the University of Texas at Arlington.
This research was funded by the University of Texas System Rising Star Award and the CPRIT First-Time Faculty Award.
