Quantum neural networks are a promising avenue for near-term quantum machine learning, but their potential is currently limited by the sheer number of required parameters and the associated computational challenges. Haijian Shao, Bowen Yang, and Wei Liu of Jiangsu University of Science and Technology, together with Yingtao Jiang of the University of Nevada, Las Vegas, and colleagues are tackling this problem with LiePrune, a new framework that dramatically simplifies these networks. LiePrune uses a combination of Lie group theory and quantum geometric principles to identify and remove redundant parameters in a principled manner, achieving significant compression without sacrificing performance. The researchers show that the method not only compresses networks aggressively but also comes with provable guarantees on redundancy detection, function approximation, and computational efficiency, representing a major step toward practical and scalable quantum machine learning.
Near-term quantum machine learning faces scalability limitations due to excessive parameter counts, barren plateaus, and hardware constraints. In this work, we present LiePrune, a mathematically grounded one-shot structured pruning framework for quantum neural networks and parameterized quantum circuits that exploits Lie group structure and quantum geometric information. The method jointly represents each gate in the Lie group, the Lie-algebra dual space, and a quantum geometric feature space, enabling principled redundancy detection and aggressive compression. Experiments on quantum classification tasks using the MNIST and FashionMNIST datasets and on quantum chemistry simulations of the LiH variational quantum eigensolver (VQE) demonstrate that LiePrune achieves more than 10x compression.
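To make the gate-level redundancy idea more concrete, here is a minimal sketch of how such a dual representation could be used in practice. It is our own illustration under simplifying assumptions, not the authors' implementation: each rotation gate is encoded as a feature vector combining its Lie-algebra coefficients (the rotation angle placed on the Pauli generator it exponentiates) with a placeholder geometric importance score, and gates with nearly collinear features are greedily grouped so that only one representative per group survives pruning. The function names and the cosine-similarity threshold are hypothetical.

```python
# Illustrative sketch (NOT the paper's code): score each parameterized
# rotation gate by a feature vector built from (i) its Lie-algebra
# coefficients and (ii) a stand-in "quantum geometric" importance signal,
# then greedily merge gates whose features are nearly collinear.
import numpy as np

def gate_features(angles, generators, grad_mags):
    """angles[i]: rotation angle; generators[i]: one-hot over (X, Y, Z);
    grad_mags[i]: placeholder for a quantum-geometric importance score."""
    feats = np.concatenate(
        [angles[:, None] * generators, grad_mags[:, None]], axis=1)
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, 1e-12)          # unit feature vectors

def redundancy_groups(feats, tol=0.99):
    """Greedily group gates whose unit feature vectors have absolute
    cosine similarity above `tol`; keep one representative per group."""
    keep, groups = [], []
    for i, f in enumerate(feats):
        for g_idx, rep in enumerate(keep):
            if abs(f @ feats[rep]) > tol:            # nearly collinear -> redundant
                groups[g_idx].append(i)
                break
        else:
            keep.append(i)                           # new representative gate
            groups.append([i])
    return keep, groups

rng = np.random.default_rng(0)
n_gates = 12
angles = rng.uniform(-np.pi, np.pi, n_gates)
generators = np.eye(3)[rng.integers(0, 3, n_gates)]  # X/Y/Z generator, one-hot
grad_mags = rng.uniform(0.0, 1.0, n_gates)

feats = gate_features(angles, generators, grad_mags)
keep, groups = redundancy_groups(feats)
print(f"kept {len(keep)} of {n_gates} gates; redundancy groups: {groups}")
```

In the paper's setting, the geometric component of each gate's representation would be derived from quantum geometric information about the circuit rather than the random stand-in used here.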
LiePrune compresses quantum networks with minimal loss
Scientists have developed LiePrune, a new framework for compressing quantum neural networks and parameterized quantum circuits that significantly reduces the number of parameters required for operation. Experiments demonstrate that LiePrune can compress models by a factor of 8-10x with minimal accuracy loss on classification tasks such as MNIST and FashionMNIST. On the MNIST 4-vs-9 task, the team reduced the parameter count from 288 to 36 while maintaining 95.9% of the original accuracy after fine-tuning. Similar results were observed on the FashionMNIST sandals-vs-boots task, where the parameters were compressed from 360 to 36 while reaching 74.0% accuracy after fine-tuning. The research team also investigated LiePrune's performance on the LiH variational quantum eigensolver (VQE) problem, a quantum chemistry task, using a 12-qubit, 12-layer ansatz. LiePrune achieved 12x compression, reducing the parameters from 432 to 36, but this aggressive compression significantly increased the energy deviation, with the ground-state energy initially worsening from -7.5225 Ha to -3.7416 Ha.
Subsequent fine-tuning partially restored the ground-state energy to -4.2875 Ha, but a deviation of 3.23 Ha remained. Further analysis revealed that mild compression levels produced minimal energy deviations that were fully recoverable with fine-tuning, while aggressive compression introduced significant errors. These results indicate that LiePrune effectively compresses quantum models for classification tasks, but chemically structured Hamiltonians are more sensitive to strong pruning and require specialized strategies to maintain accuracy.
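For readers who want to check the quoted figures, the compression factors and energy deviations follow directly from the reported numbers; the short sketch below simply redoes that arithmetic (the task labels are ours).

```python
# Recomputing the compression factors and energy deviations quoted above
# from the reported numbers (task labels are illustrative).
params_before = {"MNIST 4-vs-9": 288,
                 "FashionMNIST sandals-vs-boots": 360,
                 "LiH VQE (12 qubits, 12 layers)": 432}
params_after = 36
for task, n in params_before.items():
    print(f"{task}: {n} -> {params_after} parameters, {n // params_after}x compression")

e_unpruned  = -7.5225   # Ha, before pruning
e_pruned    = -3.7416   # Ha, immediately after 12x pruning
e_finetuned = -4.2875   # Ha, after fine-tuning
print(f"deviation after pruning:     {abs(e_unpruned - e_pruned):.4f} Ha")
print(f"deviation after fine-tuning: {abs(e_unpruned - e_finetuned):.4f} Ha")
```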
LiePrune enables scalable quantum circuit compression
LiePrune represents a major advance toward practical quantum neural networks and parameterized quantum circuits. The researchers created a mathematically grounded framework for efficiently pruning these circuits, addressing key scalability limitations caused by excessive parameter counts and computational demands. The method exploits the Lie group structure underlying quantum circuits and enables aggressive compression while preserving functionality. Experiments across a variety of tasks, including image classification and quantum chemistry simulations, demonstrate that LiePrune achieves parameter reductions of 8-12x with minimal accuracy loss, and in some cases improved performance. The work rests on a new approach to redundancy detection that represents each gate jointly in the Lie group, its Lie-algebra dual space, and a quantum geometric feature space. However, chemically structured Hamiltonians are more sensitive to compression than the classification benchmarks tested, suggesting that further refinements, such as chemistry-aware constraints, are needed to fully realize the benefits of LiePrune in this domain.
👉 More information
🗞 LiePrune: Lie groups and quantum geometry dual representation for one-shot structured pruning of quantum neural networks.
🧠 ArXiv: https://arxiv.org/abs/2512.09469
