Quantum machine learning improves accuracy despite increased circuit complexity

Machine Learning


Researchers at the University of Sharjah, in collaboration with New York University Abu Dhabi and the NYUAD Institute, conducted a detailed investigation of the scaling behavior of hybrid quantum neural networks, providing valuable insights into optimizing performance as computational complexity increases. Danil Vyskubov and colleagues systematically investigated the effects of both the number of quantum layers and the number of qubits on accuracy and fundamental quantum behavior. A controlled scaling study conducted across multiple datasets revealed significant scaling trends and saturation points, yielding practical guidance for optimizing hybrid quantum-classical classifiers. The study also provides a standardized evaluation protocol, an important step toward understanding and improving the capabilities of quantum machine learning.

Persistent entanglement increase favors wider quantum circuits over depth increase

Hybrid quantum-classical neural networks are a promising avenue for machine learning that leverages the potential of quantum computing to enhance classical algorithms. However, understanding how these networks scale with increasing resources, especially the number of qubits and the depth of quantum circuits, is critical for practical implementation. Previous research has often been hampered by a lack of systematic studies controlling for these variables, resulting in inconsistent results and difficulty in drawing generalizable conclusions. In this study, the team performed two controlled sweeps: increasing the number of quantum layers, L, at a fixed number of qubits, Q, and increasing the number of qubits, Q, at a fixed depth, L. The team used multiple datasets to ensure the robustness of their findings and to identify dataset-dependent behaviors.
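The two controlled sweeps described above can be sketched as a simple experiment grid. In this minimal sketch, the `train_and_eval` function is a hypothetical stand-in for training the hybrid model, and the specific Q and L values are illustrative, not the paper's actual settings:

```python
# Sketch of the study's two controlled sweeps: vary depth L at a
# fixed width Q, then vary width Q at a fixed depth L, under one
# shared training budget. Values and function are placeholders.

def train_and_eval(n_qubits, n_layers, epochs=50):
    # Placeholder: a real run would train the hybrid network here
    # and return test-set metrics such as the F1 score.
    return {"qubits": n_qubits, "layers": n_layers, "epochs": epochs}

FIXED_Q, FIXED_L = 8, 4

# Sweep 1: depth scaling at fixed width.
depth_sweep = [train_and_eval(FIXED_Q, L) for L in (1, 2, 4, 8)]
# Sweep 2: width scaling at fixed depth.
width_sweep = [train_and_eval(Q, FIXED_L) for Q in (2, 4, 6, 8)]

results = depth_sweep + width_sweep
print(len(results))  # 8 configurations
```

Keeping the training budget (`epochs`) identical across all configurations is what makes the two sweeps comparable.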

The researchers found that entanglement, a key quantum resource, consistently increases, by up to 35%, as the number of qubits grows. This sustained growth matters because optimization challenges at fixed circuit depths had previously made it difficult to enhance entanglement reliably. Increasing the number of qubits clearly increases both entanglement and quantum expressivity, the circuit’s ability to represent a wide range of functions. In contrast, increasing the circuit depth, i.e. the number of sequential quantum operations, exhibits dataset-dependent performance saturation and optimization instability. The team systematically varied the depth and width of the circuit while maintaining a consistent training budget across three benchmark image datasets and found that performance plateaus were common with increasing layers, but not with increasing qubits. This suggests that for many applications, prioritizing qubit count over circuit depth may be the more effective strategy for improving performance.
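As a rough illustration of why wider circuits have more entanglement to draw on (this is not the paper's EEE metric, just a generic statevector calculation), the bipartite entanglement entropy of a random pure state grows with the number of qubits:

```python
# Illustrative only: von Neumann entanglement entropy (in bits) of a
# half/half bipartition of a random n-qubit pure state, computed via
# the Schmidt (singular value) decomposition.
import numpy as np

def half_chain_entropy(n_qubits, seed=0):
    """Entanglement entropy of the first n/2 qubits (even n assumed)."""
    rng = np.random.default_rng(seed)
    dim = 2 ** n_qubits
    # Random complex state vector, normalized.
    psi = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    psi /= np.linalg.norm(psi)
    # Schmidt coefficients across the half/half bipartition.
    half = 2 ** (n_qubits // 2)
    schmidt = np.linalg.svd(psi.reshape(half, dim // half),
                            compute_uv=False)
    p = schmidt ** 2          # Schmidt probabilities
    p = p[p > 1e-12]          # drop numerical zeros
    return float(-(p * np.log2(p)).sum())

for n in (4, 6, 8):
    print(n, round(half_chain_entropy(n), 3))
```

The entropy is bounded by n/2 bits for a half/half split, so adding qubits raises the ceiling on how much entanglement the circuit can generate.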

Evaluating prediction performance with the F1 score showed different trends as the number of qubits increased, depending on the number of quantum layers, highlighting the interaction between circuit architecture and dataset characteristics. Quantum circuit expressivity (QCE), a metric that quantifies the diversity of functions a quantum circuit can represent, and entanglement entropy estimation (EEE), a measure of entanglement within a quantum state, both track these performance trends, revealing dataset-dependent scaling regions and saturation points. The correlation between these quantum properties and predictive performance provides valuable insight into the mechanisms driving the observed behavior. A consistent evaluation protocol is now available to guide the choice of circuit width and depth for hybrid quantum-classical classifiers. The protocol details the specific datasets used, the range of qubit numbers and layer depths investigated, and the evaluation metrics, allowing other researchers to reproduce and extend these findings.
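For reference, a macro-averaged F1 score can be computed as below; the labels are illustrative, and the averaging variant (macro vs. weighted or micro) used in the study is an assumption here:

```python
# Minimal sketch of a macro-averaged F1 score for comparing
# classifier configurations. Labels are illustrative.

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        scores.append(f1)
    return sum(scores) / len(scores)

print(round(macro_f1([0, 0, 1, 1], [0, 1, 1, 1]), 4))  # → 0.7333
```

Unlike plain accuracy, the F1 score balances precision and recall per class, which makes it a fairer comparison metric when class frequencies differ across datasets.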

It remains unclear why certain datasets respond differently to changes in circuit depth, and further investigation is required to pinpoint the reasons for these variations. One hypothesis is that the inherent structure and complexity of each dataset influence its susceptibility to the limitations imposed by deeper circuits: datasets with more complex features may benefit more from the expressivity gains of wider circuits, whereas simpler datasets may be more prone to overfitting with deeper circuits. A clear way to evaluate and compare hybrid quantum-classical neural networks (QNNs) is key to accelerating progress in quantum machine learning. This systematic investigation reveals how expanding either the width or the depth of hybrid quantum-classical networks affects performance, allowing the impact of different qubit numbers and circuit depths to be assessed. The impact goes beyond improved performance: a deeper understanding of scaling behavior is important for judging the feasibility of deploying QNNs on near-term quantum hardware, where qubit counts and circuit depths remain limited.

Independently varying the number of qubits and the number of sequential quantum operations established a standardized method for evaluating these models, allowing a more nuanced understanding of the trade-off between circuit complexity and performance. Measures of quantum properties such as entanglement between qubits and expressivity consistently improve with increasing qubit count, suggesting that increasing the “width” of a quantum circuit is a more reliable way to improve performance than simply increasing its “depth”. Indeed, simply adding more layers does not guarantee improvement and may introduce dataset-dependent limitations and optimization challenges. The findings have implications for the development of quantum algorithms for a variety of applications, including image recognition, natural language processing, and materials discovery. Further research is needed to explore the optimal balance between circuit depth and width for specific tasks and datasets, and to investigate the potential benefits of incorporating other quantum resources, such as coherence and superposition, into hybrid quantum-classical architectures.

This study demonstrated that increasing the number of qubits in a quantum-classical hybrid neural network generally improves performance, but simply adding more quantum layers does not always yield better results. This is important because it provides guidance on how to best utilize limited quantum resources, such as number of qubits and circuit depth, in near-term quantum hardware. This study reveals the relationship between quantum properties and predictive performance across multiple datasets by establishing a standardized evaluation protocol. The authors suggest that future research focus on optimizing the balance between circuit depth and width for specific applications.

👉 More information
🗞 Scaling laws for hybrid quantum neural networks: Diagnosing depth, width, and quantum centers.
🧠ArXiv: https://arxiv.org/abs/2604.06007


