Machine learning predicts quantum linear solver parameters, delivering a 2.6x optimization speedup

Machine Learning


Variational quantum linear solvers offer a potential means of solving complex systems of equations on emerging quantum computers, but the difficulty of choosing good starting parameters often reduces their effectiveness as problem size increases. Youla Yang and colleagues at Indiana University Bloomington addressed this limitation by introducing PVLS, a method that predicts effective initial parameters for these solvers. PVLS uses graph neural networks to analyze the structure of the linear system and generate initial settings that significantly improve the speed and reliability of the quantum computation. Results show that PVLS accelerates optimization of small to medium-sized systems by up to 2.6 times, paving the way for more practical quantum algorithms in the near term.

PVLS learns from the structure of a linear system to predict effective initial parameters for a quantum circuit, giving the optimizer a better starting point. In extensive testing, PVLS significantly outperformed traditional initialization techniques such as random initialization, principal component analysis, and minimum-norm strategies across a wide range of test cases.
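The paper's exact network architecture and training setup are not reproduced in this summary. The sketch below is a minimal NumPy illustration of the general idea of a structural warm start: the coefficient matrix is treated as a graph (rows/columns as nodes, nonzero entries as edges), a small message-passing encoder summarizes it, and a readout maps the embedding to a bounded vector of circuit angles. All function names, feature choices, and layer sizes here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def matrix_to_graph(A):
    """Treat each row/column index of A as a node; the sparsity pattern of A
    defines the edges. Returns an adjacency matrix and simple node features."""
    adj = (np.abs(A) > 0).astype(float)              # (n, n) sparsity pattern
    deg = adj.sum(axis=1, keepdims=True)             # node degree
    diag = np.abs(np.diag(A)).reshape(-1, 1)         # diagonal magnitude
    feats = np.concatenate([deg, diag], axis=1)      # (n, 2) node features
    return adj, feats

def message_passing(adj, feats, weights):
    """Tiny GNN-style encoder: mean-neighbor aggregation, a learned linear map
    and a tanh nonlinearity per round, then mean pooling to a graph embedding."""
    h = feats
    deg = adj.sum(axis=1, keepdims=True) + 1e-9
    for W in weights:                                # one weight matrix per round
        msg = (adj @ h) / deg                        # mean over neighbors
        h = np.tanh(np.concatenate([h, msg], axis=1) @ W)
    return h.mean(axis=0)                            # graph-level embedding

def predict_initial_params(A, weights, readout_W, n_params):
    """Map the graph embedding to a bounded ansatz-parameter vector."""
    adj, feats = matrix_to_graph(A)
    z = message_passing(adj, feats, weights)
    return np.pi * np.tanh(z @ readout_W)[:n_params]  # angles in (-pi, pi)

# Toy usage with untrained random weights (hidden size 8) on a 16x16
# tridiagonal system; in PVLS the weights would be trained on many systems.
rng = np.random.default_rng(0)
A = np.diag(np.full(16, 2.0)) + np.diag(np.full(15, -1.0), 1) + np.diag(np.full(15, -1.0), -1)
weights = [rng.normal(size=(4, 8)), rng.normal(size=(16, 8))]
theta0 = predict_initial_params(A, weights, rng.normal(size=(8, 12)), n_params=12)
```

In this toy version the predicted angles are only as good as the (random) weights; the point of the paper is that training such an encoder across many systems lets the prediction replace a random seed.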

The team evaluated PVLS on more than 15,000 synthetic systems and 10 real-world sparse matrices with dimensions from 2⁴ to 2¹⁰ (16 to 1024). VQLS initialized with PVLS converges in significantly fewer iterations, cutting the number of optimization steps by more than 60% on average and yielding a 2.6x speedup in total training time. The inference overhead is minimal, at roughly 2 ms per instance, so the efficiency gains carry over in full. The work targets a critical challenge in the field: VQLS is hard to train because of barren plateaus and inefficient parameter initialization. The team's approach uses graph neural networks (GNNs) to exploit structural information embedded in the coefficient matrix of a linear system and predict effective initial parameters for the VQLS circuit.

Across matrix sizes from 16 to 1024, PVLS delivered up to 2.6x faster optimization, reducing the number of iterations required to reach a solution while maintaining accuracy comparable to existing methods. On average, PVLS reduced the initial cost by 81.3% and the final loss by 71% compared to random initialization. The evaluation on 10 real-world sparse matrices confirmed that the method generalizes to practical problems and remains robust. Across small to medium-sized systems, PVLS consistently improved both stability and convergence speed relative to commonly used initialization techniques.

This improvement comes from the method's ability to generate effective parameter seeds, which cuts overall execution time by an estimated 62.5% compared to random initialization. Although these efficiency gains are measured in classical simulations, they point to real time savings in hybrid quantum-classical workflows.
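To make the iteration-count comparison concrete, here is a minimal classical-simulation sketch of how convergence speed for a given parameter seed could be measured: a small RY/CNOT ansatz, the global VQLS-style cost C(θ) = 1 − |⟨b|A x(θ)⟩|² / ‖A x(θ)⟩‖², and an off-the-shelf optimizer. The ansatz, cost definition, optimizer choice, and all function names are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np
from scipy.optimize import minimize

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n_qubits):
    """Apply a single-qubit gate to one qubit of an n-qubit statevector."""
    t = np.moveaxis(state.reshape([2] * n_qubits), qubit, 0)
    t = np.tensordot(gate, t, axes=(1, 0))
    return np.moveaxis(t, 0, qubit).reshape(-1)

def apply_cnot(state, control, target, n_qubits):
    """Apply CNOT(control, target) to an n-qubit statevector."""
    t = state.reshape([2] * n_qubits).copy()
    v = np.moveaxis(t, (control, target), (0, 1))    # view onto t
    v[1] = v[1][::-1].copy()                         # flip target where control = 1
    return t.reshape(-1)

def ansatz_state(theta, n_qubits=3):
    """Layers of RY rotations followed by a CNOT chain, starting from |0...0>."""
    state = np.zeros(2 ** n_qubits); state[0] = 1.0
    for layer in np.asarray(theta).reshape(-1, n_qubits):
        for q, angle in enumerate(layer):
            state = apply_1q(state, ry(angle), q, n_qubits)
        for q in range(n_qubits - 1):
            state = apply_cnot(state, q, q + 1, n_qubits)
    return state

def vqls_cost(theta, A, b):
    """Global VQLS-style cost: 1 - |<b|A x(theta)>|^2 / ||A x(theta)>||^2."""
    psi = A @ ansatz_state(theta)
    return 1.0 - np.abs(np.vdot(b, psi)) ** 2 / np.vdot(psi, psi).real

def iterations_to_converge(seed, A, b, tol=1e-3):
    """Count optimizer iterations until the cost first drops below `tol`."""
    history = []
    minimize(vqls_cost, seed, args=(A, b), method="Nelder-Mead",
             callback=lambda xk: history.append(vqls_cost(xk, A, b)),
             options={"maxiter": 2000})
    hits = [i for i, c in enumerate(history) if c < tol]
    return (hits[0] + 1) if hits else len(history)

# Example: an 8x8 tridiagonal system; `seed` would be either a PVLS-style
# prediction (e.g. from the sketch above) or a random draw for comparison.
rng = np.random.default_rng(1)
A = np.diag(np.full(8, 2.0)) + np.diag(np.full(7, -1.0), 1) + np.diag(np.full(7, -1.0), -1)
b = np.ones(8) / np.sqrt(8)
seed = rng.uniform(-np.pi, np.pi, size=6)            # 2 layers x 3 qubits
print("iterations to reach tol:", iterations_to_converge(seed, A, b))
```

Running the same harness with a learned seed and a random seed, and averaging over many systems, is one way the reported >60% reduction in optimization steps could be quantified in classical simulation.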

👉 More information
🗞 PVLS: A learning-based parameter prediction method for variational quantum linear solvers
🧠 arXiv: https://arxiv.org/abs/2512.04909
