Quantum reservoir computing achieves classification with fewer features

Machine Learning


Researchers from CNRS, Thales, and the Université Paris-Saclay have demonstrated a new approach to quantum reservoir computing using a simple system: a single transmon coupled to a readout resonator. In this proof-of-concept implementation, classification is achieved by encoding the input data in the amplitude of a coherent drive and measuring the resulting cavity states in the Fock basis. Unlike gate-based quantum computing, this approach requires no sequence of precisely calibrated gates; only a linear readout layer is trained on the measured features. The experiment successfully solved two classical classification tasks with fewer measured features than required by comparable classical neural networks, suggesting the potential of this hardware for quantum machine learning. The researchers note that while various implementations of quantum reservoir computing have been explored in simulations, few experimental implementations to date enable scalable and generalized quantum machine learning models.
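The pipeline described above can be sketched in a few lines. In this illustrative toy model (not the authors' code), an ideal linear cavity driven by a coherent tone of input-dependent amplitude yields Poisson-distributed Fock populations, which then feed a trained linear readout; the input-to-amplitude map, the Fock cutoff, and the classification task are all hypothetical stand-ins, and the real device's transmon adds nonlinearity on top of this picture.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Toy binary task: one-dimensional inputs drawn from two overlapping classes.
x = np.concatenate([rng.normal(-1.0, 0.4, 200), rng.normal(1.0, 0.4, 200)])
y = np.concatenate([np.zeros(200), np.ones(200)])

N_FOCK = 8  # number of measured Fock populations (hypothetical cutoff)

def fock_features(x):
    """Map inputs to photon-number populations of coherent states |alpha(x)>.

    For an ideal linear cavity the populations are Poissonian in |alpha|^2;
    this stands in for the measured reservoir features.
    """
    alpha = 0.8 + 0.6 * x                      # hypothetical encoding map
    nbar = np.abs(alpha) ** 2
    n = np.arange(N_FOCK)
    fact = np.array([math.factorial(k) for k in n], dtype=float)
    return nbar[:, None] ** n / fact * np.exp(-nbar)[:, None]

# Reservoir-computing readout: only a linear (ridge-regularized) layer is trained.
F = np.hstack([fock_features(x), np.ones((len(x), 1))])  # features plus bias column
w = np.linalg.solve(F.T @ F + 1e-6 * np.eye(F.shape[1]), F.T @ y)
accuracy = np.mean((F @ w > 0.5) == y)
print(f"linear readout accuracy on Fock features: {accuracy:.2f}")
```

Training only the linear output layer is the defining trait of reservoir computing: the physical system supplies the nonlinear features, so no gradients flow through the quantum hardware.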

Circuit quantum electrodynamic system for reservoir computing

Achieving this proof of concept opens new avenues for hardware-efficient quantum neural networks. Corresponding author Danijela Markovic explained that their design captures a large number of nonlinear features from a single physical system, and emphasized its efficiency. Numerical simulations corroborated these findings, revealing that increasing the Kerr nonlinearity of the system improves reservoir performance and suggesting avenues for future design optimization. The researchers have made their data and code openly available via Zenodo at https://zenodo.org/records/15745370 to facilitate reproducibility and further investigation within the quantum machine learning community. Markovic said the work demonstrates a hardware-efficient quantum neural network implementation that can be scaled up and generalized to other quantum machine learning models.

Nonlinear feature extraction is possible using Fock basis measurement

The research team implemented a proof-of-concept quantum reservoir using just one transmon, a type of superconducting qubit, coupled to a readout cavity, challenging the assumption that more complex architectures are essential for this type of computation. The achievement relies on a new approach to data input and feature extraction. Further analysis supported by numerical simulations revealed that increasing the Kerr nonlinearity, an anharmonicity characteristic of superconducting circuits, benefits reservoir performance. The experimental results, published by the American Physical Society (Phys. Rev. Applied 25, 054005, published May 4, 2026) with data and code available via Zenodo at https://zenodo.org/records/15745370, demonstrate a scalable implementation that can be generalized to other quantum machine learning models, potentially leading to more compact and powerful quantum neural networks.

Kerr nonlinearity improves reservoir performance

Researchers at the CNRS Laboratoire Albert Fert demonstrated the efficiency of quantum reservoir computing using simple hardware. The achievement challenges the idea that large numbers of qubits are needed to generate enough nonlinear features for effective computation: the single-transmon reservoir classified two classical tasks while using fewer measured features than typically required by classical neural networks. Further investigation revealed that enhancing the Kerr nonlinearity, which makes the resonator frequency depend on the number of photons it contains, positively impacts reservoir performance. Numerical simulations corroborated these experimental results, showing that increasing the Kerr nonlinearity leads to improved classification accuracy.
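As a rough numerical illustration of why the Kerr term matters (a minimal sketch with illustrative parameters, not the paper's simulation), one can evolve a resonantly driven Kerr oscillator in a truncated Fock space and compare the photon-number populations with and without the nonlinearity. Without it, the drive produces a coherent state with Poissonian populations; the Kerr term reshapes that distribution, so the measured Fock features become nonlinear functions of the drive amplitude.

```python
import numpy as np

N = 12  # Fock-space truncation (illustrative)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator in the Fock basis

def fock_populations(kerr, drive=0.5, t=2.0):
    """Evolve the vacuum under H = (K/2) a^+2 a^2 + drive (a + a^+)
    (resonant frame, hbar = 1) and return the photon-number populations.
    All parameter values are illustrative, not taken from the experiment.
    """
    H = 0.5 * kerr * (a.T @ a.T @ a @ a) + drive * (a + a.T)
    evals, evecs = np.linalg.eigh(H)          # exact evolution via eigendecomposition
    psi0 = np.zeros(N); psi0[0] = 1.0         # start in the vacuum state
    psi_t = evecs @ (np.exp(-1j * evals * t) * (evecs.T @ psi0))
    return np.abs(psi_t) ** 2

p_linear = fock_populations(kerr=0.0)   # coherent state: Poissonian populations
p_kerr = fock_populations(kerr=2.0)     # Kerr term reshapes the distribution
print("max population shift due to Kerr:", np.max(np.abs(p_kerr - p_linear)).round(3))
```

The evolution is unitary within the truncated space, so the populations always sum to one; the size of the shift between the two distributions is a crude proxy for how much extra nonlinearity the Kerr term injects into the measured features.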


