Hybrid quantum-assisted machine learning improves error-correcting codes in digital quantum systems



Quantum error correction remains a major hurdle in the development of practical quantum computers, as maintaining the integrity of quantum information is extremely difficult. Yariv Yanay of the University of Maryland and colleagues have demonstrated a new approach to designing better error-correcting codes by integrating classical and quantum computing. Their work builds on the quantum Lego formalism, a method of constructing codes from basic building blocks, and uses reinforcement learning to automate code generation. The work is particularly noteworthy because it takes advantage of commercially available quantum devices to search for codes tailored to specific hardware limitations and induced errors, making it an important step toward fault-tolerant quantum computing. By combining the strengths of classical and quantum algorithms, the team hopes to accelerate the discovery of robust and efficient error-correction strategies.


The building blocks of digital quantum computation require robust error correction, and the quantum Lego formalism provides a systematic way to construct new stabilizer codes from basic building blocks. Previous work demonstrated that this formalism, combined with an automated reinforcement learning process, can generate improved error-correcting codes. The present study extends those results by introducing hybrid classical-quantum algorithms, further advancing quantum error correction and scalable quantum computing.

Reinforcement learning for hardware-enabled code discovery

This work pioneered a hybrid classical-quantum algorithm for discovering quantum error-correcting codes tailored to specific hardware limitations. Building on the quantum Lego framework, which represents codes as tensors and allows for systematic construction through concatenation, the researchers went beyond purely classical approaches to code evaluation. Their work leverages reinforcement learning and frames code construction as a game in which agents iteratively build codes by adding and contracting tensor building blocks, aiming to maximize a defined reward.
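The game framing can be illustrated with a minimal sketch. Everything below is invented for illustration: the block labels, the reward (which in the paper's setting would come from a device measurement), and the Monte Carlo update rule are stand-ins, not the team's actual environment.

```python
import random

# Toy sketch of code construction as an episodic game: the agent
# either appends a tensor block or stops, then the finished
# candidate is scored and the actions taken are reinforced.
BLOCKS = ["T6", "T4"]          # hypothetical tensor block labels
ACTIONS = BLOCKS + ["STOP"]

def measure_reward(code):
    # Stand-in for device-measured code performance; here we just
    # favor longer codes that mix both block types, purely for demo.
    return len(code) + len(set(code))

def run_episode(q, epsilon=0.2, max_len=4):
    """Build one candidate code, then reinforce the actions taken."""
    code = []
    while len(code) < max_len:
        state = tuple(code)
        if random.random() < epsilon:
            action = random.choice(ACTIONS)      # explore
        else:                                    # exploit best known
            action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        if action == "STOP":
            break
        code.append(action)
    reward = measure_reward(code)
    for i, a in enumerate(code):  # Monte Carlo credit assignment
        key = (tuple(code[:i]), a)
        q[key] = q.get(key, 0.0) + 0.1 * (reward - q.get(key, 0.0))
    return code, reward

random.seed(1)
q_table = {}
best = 0
for _ in range(300):
    _, r = run_episode(q_table)
    best = max(best, r)
print(best)
```

In the paper's setting the reward evaluation is the expensive step, which is exactly what the quantum hardware is brought in to perform.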

This process allows the algorithm to efficiently search the vast space of possible quantum error-correcting codes. Importantly, the team innovated by integrating calls to commercially available quantum devices (Quantinuum's trapped-ion processor and IBM's superconducting chip) to evaluate reward functions. Previously, the utility of a code was assessed classically, a computationally intensive task. Here, quantum devices directly evaluate a code's response to both native device noise and intentionally induced photon losses, providing a natural and efficient measure of performance. The experimental setup involved preparing the qubits in a logical state, encoding logical Pauli operators, and benchmarking the resulting circuit's performance with stabilizer measurements.
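The benchmarking loop can be sketched with a toy stand-in: prepare a logical state, let noise act, then measure stabilizers to obtain a syndrome. The 3-qubit repetition code, bit-flip-only noise, and shot count below are illustrative choices, not the paper's setup.

```python
import random

# Sketch of one benchmarking shot on a 3-qubit repetition code:
# encode a logical bit, apply independent bit-flip noise, then
# "measure" the stabilizers Z1Z2 and Z2Z3 to get a syndrome.
def run_shot(logical_bit, p_flip, rng):
    qubits = [logical_bit] * 3                              # encode
    qubits = [q ^ (rng.random() < p_flip) for q in qubits]  # noise
    syndrome = (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])
    return qubits, syndrome

rng = random.Random(7)
counts = {}
for _ in range(10_000):
    _, syn = run_shot(0, 0.05, rng)
    counts[syn] = counts.get(syn, 0) + 1
print(counts)  # syndrome histogram; (0, 0) dominates at low noise
```

On real hardware the same histogram comes from repeated circuit executions, with the syndrome statistics serving as the raw material for the reward function.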

This study adopted the quantum Lego (QL) formalism, in which a tensor represents the encoding map of a quantum error-correcting code (QECC): logical and physical qubits are related through tensor contraction. These tensors act as "Lego" blocks that can be combined to create larger codes, inheriting symmetries and adhering to certain constraints. The team's reinforcement learning routines use this framework to learn new stabilizer codes, prioritizing outcomes such as protection against biased noise through carefully designed reward functions. This approach avoids the exponential cost of classically calculating code distance and logical error rate. To quantify performance, the study details how to calculate uncorrected error rates directly on quantum hardware.
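The idea that an encoding map is a tensor relating logical and physical qubits can be made concrete with a small example. The 3-qubit repetition code below is an illustration of the general principle, not one of the paper's Lego blocks.

```python
import numpy as np

# The encoding map of the 3-qubit repetition code as a rank-4
# tensor: index 0 is the logical qubit, indices 1-3 are physical.
T = np.zeros((2, 2, 2, 2))
T[0, 0, 0, 0] = 1.0   # |0>_L -> |000>
T[1, 1, 1, 1] = 1.0   # |1>_L -> |111>

# Contracting the logical leg with a logical state yields the
# encoded physical state: logical and physical qubits related
# purely through tensor contraction.
plus = np.array([1.0, 1.0]) / np.sqrt(2)       # |+>_L
physical = np.einsum("labc,l->abc", T, plus)   # shape (2, 2, 2)
print(physical.reshape(-1))  # (|000> + |111>) / sqrt(2)
```

In the QL formalism, larger codes arise by contracting legs of several such tensors together, with the stabilizer group of the result inherited from the blocks.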

Starting from an initial logical qubit state, the algorithm encodes Pauli operators and measures the resulting stabilizers to determine the error rate. Combining this direct quantum measurement of the error rate with classical reinforcement learning loops represents a significant methodological advance, enabling the search for effective error correction even on devices operating below traditional error-correction thresholds. The technique opens the possibility of optimizing QECCs for specific quantum platforms, paving the way to more robust and scalable quantum computing.

Optimal Pauli corrections from a hybrid algorithm

Scientists have achieved significant progress in quantum error correction by implementing a hybrid classical-quantum algorithm that leverages both reinforcement learning and commercial quantum devices. The research team systematically constructed new stabilizer codes using the quantum Lego formalism, automating the production of improved error-correcting codes tailored to specific device characteristics and induced loss errors. In this work, errors are not actively corrected within the circuit; instead, the focus is on identifying the optimal logical Pauli corrections that maximize recovery of the initial logical state from the final, potentially corrupted state.

The experiment involved collecting a data set of (initial logical state, measured syndrome, final logical state) triples and assigning to each syndrome the logical Pauli correction that returns the maximum number of runs to the correct initial state. The team quantified performance using p_ND, the ratio of uncorrected runs to total runs, and aimed to minimize this value through the learning process. For example, analysis of a simple two-qubit code revealed distinct output distributions for different initial states: for |+X⟩, yields were 27.3% for syndrome 00, 28.0% for 01, 1.8% for 10, and 0.8% for 11, while |+Y⟩ and |+Z⟩ showed different distributions. These measurements were critical in determining the effectiveness of the chosen correction strategy.
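The syndrome-to-correction assignment described above amounts to a lookup table learned from data. The sketch below illustrates the idea on a single logical qubit; the Pauli action on the six-state notation and the tiny data set are invented for demonstration, not taken from the paper.

```python
from collections import Counter, defaultdict

# Learn, per syndrome, the logical Pauli that returns the most runs
# to the correct initial state, then report the uncorrected fraction
# p_ND. States are written as "+Z", "-X", etc. (sign, then axis).
PAULIS = ["I", "X", "Y", "Z"]

def apply_logical(pauli, state):
    # Each Pauli fixes its own axis and flips the sign of the
    # other two axes (up to global phase).
    flips = {"I": "", "X": "YZ", "Y": "XZ", "Z": "XY"}
    sign, axis = state[0], state[1]
    if axis in flips[pauli]:
        sign = "-" if sign == "+" else "+"
    return sign + axis

def learn_corrections(records):
    by_syndrome = defaultdict(Counter)
    for initial, syndrome, final in records:
        for p in PAULIS:
            if apply_logical(p, final) == initial:
                by_syndrome[syndrome][p] += 1
    return {s: c.most_common(1)[0][0] for s, c in by_syndrome.items()}

def p_nd(records, corrections):
    bad = sum(1 for init, syn, fin in records
              if apply_logical(corrections.get(syn, "I"), fin) != init)
    return bad / len(records)

# Invented example data: (initial state, syndrome, final state).
records = [
    ("+Z", "00", "+Z"), ("+Z", "00", "+Z"), ("+Z", "01", "-Z"),
    ("+Z", "01", "-Z"), ("+Z", "01", "+Z"), ("+Z", "10", "-Z"),
]
corr = learn_corrections(records)
print(corr, p_nd(records, corr))
```

Minimizing p_ND over this per-syndrome choice is exactly the post-processing objective the article describes, with no active correction applied on the device itself.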

Results show that the learning process, initially tested in the Stim simulator, produced optimal codes for both isotropic noise (Pr(X) = Pr(Y) = Pr(Z) = 0.01) and biased noise (Pr(X) = Pr(Y) = 0.01, Pr(Z) = 0.05), reducing qubit error by 97% and 85%, respectively. Further implementation on Quantinuum's H1-1 trapped-ion system and IBM's superconducting heavy-hex system enabled evaluation on real quantum hardware. The researchers observed that the optimal code consistently converged to a two-qubit solution because the natural error rate exceeded the error-correction threshold on these devices. To further improve the process, a code-distance minimization stage was added to the classical learner on the IBM machine, prioritizing stabilizers with the shortest qubit-to-qubit distances.
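A biased Pauli error model with the rates quoted above can be sampled in a few lines. This Monte Carlo sketch is a stand-in for the actual Stim circuits used in the study, showing only how such a channel is specified and drawn from.

```python
import random

# Sample single-qubit Pauli noise: apply X, Y, or Z with the given
# probabilities, otherwise leave the qubit alone (identity).
def sample_pauli(pr_x, pr_y, pr_z, rng):
    u = rng.random()
    if u < pr_x:
        return "X"
    if u < pr_x + pr_y:
        return "Y"
    if u < pr_x + pr_y + pr_z:
        return "Z"
    return "I"

rng = random.Random(42)
shots = 100_000
# Biased noise from the text: Pr(X) = Pr(Y) = 0.01, Pr(Z) = 0.05.
biased = [sample_pauli(0.01, 0.01, 0.05, rng) for _ in range(shots)]
rate_z = biased.count("Z") / shots
rate_x = biased.count("X") / shots
print(rate_z, rate_x)  # empirical rates near 0.05 and 0.01
```

Under such a bias, codes that preferentially protect against Z errors are rewarded, which is what the biased-noise reward function steers the learner toward.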

By employing reward functions distorted to simulate higher-fidelity quantum computers, the scientists demonstrated the potential of this hybrid approach to generate increasingly robust and effective error-correcting codes, paving the way to more reliable quantum computation. The study confirms the feasibility of automating stabilizer-code design by adapting to the unique constraints of a particular quantum architecture.

Automated code discovery using quantum reinforcement learning

Researchers have demonstrated a classical-quantum hybrid algorithm for discovering quantum error-correcting codes. Based on the quantum Lego framework, which builds codes from basic tensor building blocks, the team integrated classical reinforcement learning with evaluation performed on commercial quantum computers. This approach goes beyond purely classical evaluation methods and allows an automated search for stabilizer codes tailored to specific device characteristics and types of induced errors.

In this study, the researchers implemented a system in which a classical reinforcement learning agent designs candidate codes and a quantum computer evaluates their performance by measuring error rates. Calculating these rates is computationally demanding on classical computers but more easily achieved on quantum hardware, making this an important step toward practical quantum error correction. The authors acknowledge limitations in the current scale of quantum devices and in the complexity of estimating code distance, which remains a classically intensive step. Future work will focus on improving the algorithm and exploring more sophisticated reward functions to further optimize code performance. They also propose investigating this hybrid approach on larger and more powerful quantum processors, which may enable the discovery of more effective error-correction strategies. This research provides a promising path to robust quantum computing, leveraging the strengths of both classical and quantum computing paradigms.


