Achieving high-resolution data from low-resolution observations using quantum super-resolution



Researchers are tackling the long-standing problem of super-resolution (SR) imaging: reconstructing detailed images from blurry, low-resolution data. Seton Hall University’s Hsin-Yi Lin and Brookhaven National Laboratory’s Huan-Hsin Tseng and Shinjae Yoo, along with Samuel Yen-Chi Chen, present the first study showing that using quantum circuits for this task may avoid the large datasets and intensive computation required by traditional machine-learning approaches. Their framework pairs variational quantum circuits (VQCs) with adaptive nonlocal observable (ANO) measurements, allowing the quantum system to learn and improve the image-reconstruction process itself. This design, which leverages quantum entanglement and superposition, achieves up to five times higher resolution with very small models and represents a major step forward at the intersection of quantum machine learning and image processing.

The researchers designed these ANOs to adapt during training, allowing the measurement process itself to be learned and refined so that it extracts progressively more high-resolution information. The approach, inspired by the Heisenberg picture of quantum mechanics, treats measurement operators as trainable entities, expanding the expressive capacity of quantum neural networks and enabling richer qubit interactions. The team designed each ANO to act on multiple qubits simultaneously, making it easier to capture the fine-grained correlations that matter for super-resolution.
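The core ingredient, a trainable multi-qubit Hermitian observable, can be sketched in a few lines. The construction below is a generic illustration under my own parameterization, not the authors' code: it builds a 2-local observable (a 4 × 4 Hermitian matrix) from an unconstrained vector of real parameters, so every parameter setting reached during training still yields a physically measurable operator.

```python
import numpy as np

def hermitian_from_params(params, dim):
    """Build a dim x dim Hermitian matrix from dim**2 real parameters:
    dim values fill the (real) diagonal, and each entry above the
    diagonal takes two values (real and imaginary parts), mirrored as
    its complex conjugate below, so H == H^dagger by construction."""
    H = np.zeros((dim, dim), dtype=complex)
    idx = 0
    for i in range(dim):
        H[i, i] = params[idx]          # real diagonal entry
        idx += 1
    for i in range(dim):
        for j in range(i + 1, dim):
            re, im = params[idx], params[idx + 1]
            idx += 2
            H[i, j] = re + 1j * im
            H[j, i] = re - 1j * im     # Hermitian mirror
    return H

def expectation(state, H):
    """<psi|H|psi>: the measured output; real whenever H is Hermitian."""
    return float(np.real(np.vdot(state, H @ state)))

# A 2-local observable acts on a 2-qubit (4-dimensional) subspace,
# so it has 4**2 = 16 real trainable parameters.
dim = 4
rng = np.random.default_rng(0)
params = rng.normal(size=dim * dim)
H = hermitian_from_params(params, dim)

# Probe it with an entangled Bell state (|00> + |11>) / sqrt(2).
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(np.allclose(H, H.conj().T))  # True: any parameter vector is a valid observable
print(expectation(bell, H))        # a real measurement outcome
```

Because the 16 parameters are unconstrained, gradient descent can move them freely while the mirrored construction keeps the operator Hermitian; a 3-local observable would act on an 8-dimensional subspace with 64 such parameters.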

In their experiments, the researchers' ANO-VQC achieved up to five times higher resolution than existing methods while maintaining a relatively small model size. The data was encoded into a quantum system, processed through a VQC, measured using adaptive ANOs to generate an output, and then decoded to reconstruct a high-resolution (HR) image. This process was iteratively refined through training to minimize the differences between the reconstructed HR images and the ground truth. Importantly, the team demonstrated that nonlocal observables act as effective “lenses” within quantum systems, extracting subtle details from entangled multi-qubit subspaces. The method exploits the inherent dimensionality of quantum Hilbert space to achieve enhanced SR performance without excessively deep circuits or large numbers of qubits. Experiments used the MNIST dataset of 28 × 28 grayscale images, downsampled to 4 × 4 pixels and upscaled to target resolutions of 12 × 12, 16 × 16, and 20 × 20. The team used mean squared error (MSE), peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS) to rigorously evaluate the ANO-VQC model's performance.
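Two of those evaluation metrics are standard and easy to reproduce. The snippet below is an illustrative NumPy implementation of MSE and PSNR (SSIM and LPIPS involve considerably more machinery and are omitted), not code from the study:

```python
import numpy as np

def mse(hr_pred, hr_true):
    """Mean squared error between reconstructed and ground-truth images."""
    return float(np.mean((hr_pred - hr_true) ** 2))

def psnr(hr_pred, hr_true, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means a closer match.
    max_val is the dynamic range (1.0 for images scaled to [0, 1])."""
    err = mse(hr_pred, hr_true)
    if err == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / err))

# Toy example at one of the article's target sizes: a 16 x 16 "ground
# truth" and a reconstruction that is off by a constant 0.1 everywhere.
truth = np.zeros((16, 16))
pred = truth + 0.1
print(mse(pred, truth))    # ≈ 0.01
print(psnr(pred, truth))   # 10 * log10(1 / 0.01) ≈ 20.0 dB
```

Lower MSE and higher PSNR both indicate more accurate pixel-level reconstruction, which is why the two metrics move in opposite directions in the results below.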
Results show that the 3-local ANO-VQC consistently outperforms the 2-local variant, achieving lower MSE and higher PSNR and SSIM across all scaling factors, indicating more accurate pixel-level reconstruction. In the ×3 super-resolution task, for example, the 2-local variant reached an MSE of 0.42 and an SSIM of 0.84, while the 3-local model reached an MSE of 0.35 and an SSIM of 0.87. The tests indicate that higher-locality nonlocal observables improve reconstruction fidelity, although a small increase in LPIPS suggests a modest perceptual trade-off: sharper details can appear slightly less natural. The researchers also noted that reconstruction quality gradually decreased in both models as the scaling factor grew toward ×5, a predictable consequence of higher upsampling rates.

The results confirm that by allowing multi-qubit Hermitian observables to adapt during training, the model effectively explores richer subspaces of Hilbert space and extends its representational dimension without deeper layers or additional qubits. The approach jointly learns how to transform and how to observe quantum states, optimizing both the variational angles and the Hermitian parameters for faithful image reconstruction. By drawing on the vast Hilbert space of quantum systems and the representational advantages of entanglement and superposition, this design could unlock new capabilities in image processing; experimentally, ANO-VQC improves image resolution by up to five times while maintaining a relatively small model size, making it a promising avenue for quantum machine learning applications.
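The joint optimization described above, updating the circuit's rotation angles and the observable's Hermitian parameters in the same gradient step, can be illustrated with a deliberately tiny single-qubit toy model. This is an assumption-laden sketch for intuition (one RY rotation, a diagonal trainable observable, finite-difference gradients), not the paper's circuit or training procedure:

```python
import numpy as np

def forward(theta, h):
    """Toy forward pass: prepare RY(theta)|0> = [cos(t/2), sin(t/2)],
    then measure a trainable diagonal observable H = diag(h[0], h[1])."""
    state = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return h[0] * state[0] ** 2 + h[1] * state[1] ** 2

def loss(params, target):
    theta, h = params[0], params[1:]
    return (forward(theta, h) - target) ** 2

def grad(params, target, eps=1e-6):
    """Central finite-difference gradient over ALL parameters at once:
    the rotation angle and the observable entries are updated jointly."""
    g = np.zeros_like(params)
    for i in range(len(params)):
        up, dn = params.copy(), params.copy()
        up[i] += eps
        dn[i] -= eps
        g[i] = (loss(up, target) - loss(dn, target)) / (2 * eps)
    return g

params = np.array([0.3, 1.0, -1.0])  # [theta, h0, h1]
target = 0.5
for _ in range(200):
    params -= 0.1 * grad(params, target)  # joint gradient-descent step

print(round(forward(params[0], params[1:]), 3))  # → 0.5
```

Because the observable's entries are trainable alongside the angle, the model has more directions in which to reduce the loss than a fixed-observable circuit of the same depth, which is the intuition behind the extended representational dimension.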

This model effectively extends the expressive power of VQCs without requiring deeper layers or more qubits, jointly optimizing variational angles and Hermitian parameters to learn the best transformation and observation of quantum states for accurate image reconstruction. On the MNIST dataset, quantitative improvements in MSE, PSNR, and SSIM over models with fixed observables confirm the effectiveness of the proposed approach. However, the authors do note a slight increase in LPIPS, suggesting that the trade-off between sharpness and perceptual realism, an adjustable balance, requires careful consideration. The work highlights adaptive measurement design as a resource-efficient mechanism for quantum learning models and points toward more compact and powerful quantum image-processing techniques. Future research will focus on extending ANO-VQC to larger quantum systems, integrating hybrid classical-quantum post-processing, and applying the methodology to more complex datasets and generative vision tasks.


