
Visual comparison of matched, mismatched, and adaptive (proposed) deep learning priors in PnP-ADMM for the phase retrieval problem. The mismatched model is trained on pathology images rather than on faces (the matched domain). The proposed method applies domain adaptation to the mismatched model and recovers images comparable in quality to those of the matched model, using less than 1% of the images required for training. Credit: Kamilov Lab
Deep learning models, such as those used in medical imaging to help detect diseases and abnormalities, must be trained with large amounts of data, but often there isn't enough data to train these models, or the data is too diverse.
Ulugbek Kamilov, associate professor of computer science and engineering and electrical and systems engineering at the McKelvey School of Engineering at Washington University in St. Louis, along with doctoral students Shirin Shoushtari, Jiamin Liu and Edward Chandler in his research group, developed a way to get around this common problem in image reconstruction.
The research team plans to present their findings this month at the International Conference on Machine Learning (ICML 2024) in Vienna, Austria.
For example, the MRI data used to train a deep learning model may come from different vendors, hospitals, equipment, patients, or body parts. Applying a model trained on one type of data to another can introduce errors. To reduce these errors, the team built on a widely used computational imaging approach called plug-and-play priors, adding a domain-adaptation step that accounts for differences between the data the model was trained on and the new incoming data sets.
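To make the idea concrete, below is a minimal sketch of a plug-and-play ADMM loop, in which the proximal operator of the prior is replaced by a denoiser. The linear measurement model, the penalty parameter `rho`, and the soft-thresholding stand-in for a trained denoiser are illustrative assumptions; the authors' released code (linked below) is the authoritative implementation.

```python
import numpy as np

def pnp_admm(y, A, denoiser, rho=1.0, num_iters=100):
    """Plug-and-play ADMM for the inverse problem y = A @ x + noise.

    The prior's proximal step is replaced by a denoiser, which in practice
    would be a neural network trained on images from the target domain.
    """
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    # Data-consistency system: (A^T A + rho I) x = A^T y + rho (z - u).
    lhs = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(num_iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))  # data-consistency step
        z = denoiser(x + u)                            # denoiser acts as the prior
        u = u + x - z                                  # dual update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 128, 64
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = 1.0
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    # Stand-in "denoiser": soft-thresholding. A trained CNN denoiser adapted
    # to the target image domain would take its place in the real pipeline.
    denoiser = lambda v: np.sign(v) * np.maximum(np.abs(v) - 0.05, 0.0)
    x_hat = pnp_admm(y, A, denoiser)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In this framework, the quality of the reconstruction depends heavily on how well the denoiser matches the images being reconstructed, which is exactly the mismatch the team's adaptation strategy targets.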
“Our method doesn't require a huge amount of training data,” Shoushtari says. “Our method allows us to adapt our deep learning models using a small amount of training data, no matter which hospital, which machine, or which part of the body the images come from.”
“The key to a domain adaptation strategy is that it can reduce errors in imaging that are caused by limited data sets,” Shoushtari says. “This makes it possible to apply deep learning to problems that were previously not possible due to data requirements.”
One proposed use of this method is to shorten MRI scans, which require patients to lie still for long periods of time; any movement by the patient introduces errors into the images.
“We looked to get data from MRIs in a shorter time,” says Shoushtari. “Typically, shorter scan times result in lower image quality, but with our method we can computationally improve the quality of the images, as if the patient had been in the machine for a longer period of time. A key innovation of our new approach is that we only need a few dozen images to fit existing MRI models to new data.”
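As a rough illustration of that adaptation step, the sketch below fine-tunes a pretrained denoiser on a small set of target-domain images using a generic supervised denoising objective. The function name, noise level, optimizer, and training schedule are assumptions made for illustration and are not taken from the paper.

```python
import torch
from torch import nn

def adapt_denoiser(denoiser: nn.Module, target_images: torch.Tensor,
                   noise_std: float = 0.05, epochs: int = 100, lr: float = 1e-4):
    """Fine-tune a pretrained denoiser on a handful of target-domain images.

    target_images: tensor of shape (N, C, H, W), where N can be as small as
    a few dozen. The recipe here is a generic denoising setup, not the
    specific procedure used by the authors.
    """
    denoiser.train()
    optimizer = torch.optim.Adam(denoiser.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        noisy = target_images + noise_std * torch.randn_like(target_images)
        optimizer.zero_grad()
        loss = loss_fn(denoiser(noisy), target_images)  # learn to remove the added noise
        loss.backward()
        optimizer.step()
    denoiser.eval()
    return denoiser
```

The adapted denoiser could then be plugged into a PnP-ADMM loop like the one sketched above to reconstruct images from the new domain.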
The method has applications beyond radiology, and the team is working with colleagues to adapt it for scientific imaging, microscopy imaging and other applications where data can be represented as images.
For more information:
Shoushtari S, Liu J, Chandler EP, Asif MS, Kamilov US. Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence Analysis. International Conference on Machine Learning (ICML), 21-27 July 2024, Vienna, Austria. icml.cc/virtual/2024/poster/34765
The source code is available on GitHub: github.com/wustl-cig/MMPnPADMM
Courtesy of Washington University in St. Louis
Citation: Deep Learning Models Can Be Trained With Limited Data: New Method Can Reduce Errors in Computational Imaging (July 26, 2024). Retrieved July 27, 2024 from https://techxplore.com/news/2024-07-deep-limited-method-errors-imaging.html