Classical training enables 160-qubit quantum generative models

Although generative modeling is a powerful application of computation, training these models is often challenging, especially when gradients must be estimated accurately. Bence Bakó, Zoltán Kolarovszki, Zoltán Zimborás, and colleagues demonstrate a new approach that circumvents these difficulties by computing the required expectation values efficiently on classical computers, enabling fully classical training. Their work introduces fermion Born machines, generative models built from specially structured quantum states and transformations that make this efficient training possible. Importantly, although training is done classically, sampling from the resulting model still requires a quantum device, potentially unlocking a computational advantage, and numerical experiments on systems of up to 160 qubits confirm the effectiveness of the new framework.

Avoiding barren plateaus in quantum machine learning

Researchers have developed a new method for training quantum circuits that specifically addresses barren plateaus, the vanishing-gradient problem that prevents learning in deeper circuits. The work uses fermionic linear optics (FLO) circuits together with "magic input states" to efficiently estimate the squared maximum mean discrepancy (MMD²), a loss that retains initial variation, keeps gradients from vanishing, and guides the training process. The magic states add expressivity beyond purely Gaussian states and allow the circuit to learn more effectively. In the FLO framework, the circuit's action corresponds to rotations from the group SO(2d) acting on the system's Majorana modes, which is what keeps its evolution classically tractable.

Circuit parameters are initialized Haar-randomly, giving a uniform distribution over starting points for the optimization. The MMD² loss measures how closely the circuit's output distribution matches the target distribution, enabling efficient training and avoiding the limitations of standard quantum machine learning approaches. The algorithm prepares the magic input state, applies an FLO circuit with tunable parameters, estimates expectation values of Pauli-Z operators via parity measurements, and computes the MMD² loss; a classical optimizer then updates the circuit parameters to minimize the loss. The procedure scales efficiently with both the number of qubits and the dataset size. This research is an important step toward overcoming major obstacles in quantum machine learning and paves the way for deeper, more expressive models.
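The loop described above can be sketched in miniature. The toy below replaces the quantum part with a hypothetical stand-in: instead of an FLO circuit on magic states, each qubit is modelled by one independent rotation angle, so the output distribution is a simple product distribution. All names (`model_probs`, `mmd2`, `train`) and the finite-difference optimizer are illustrative assumptions, not the paper's method; only the overall structure (parameterized model → MMD² loss → classical parameter update) mirrors the text.

```python
import math
import random

def model_probs(theta):
    """Toy stand-in for the circuit: product distribution over bitstrings,
    where P(bit j = 1) = sin^2(theta[j] / 2)."""
    n = len(theta)
    probs = {}
    for x in range(2 ** n):
        p = 1.0
        for j in range(n):
            pj1 = math.sin(theta[j] / 2) ** 2
            p *= pj1 if (x >> j) & 1 else (1.0 - pj1)
        probs[x] = p
    return probs

def mmd2(p, q, sigma=1.0):
    """Squared MMD with a Gaussian kernel on Hamming distance."""
    def k(x, y):
        return math.exp(-bin(x ^ y).count("1") / (2 * sigma ** 2))
    keys = set(p) | set(q)
    s = 0.0
    for x in keys:
        for y in keys:
            dx = p.get(x, 0.0) - q.get(x, 0.0)
            dy = p.get(y, 0.0) - q.get(y, 0.0)
            s += dx * dy * k(x, y)
    return s

def train(target, n, steps=300, lr=0.5, eps=1e-4):
    """Minimize MMD2 by finite-difference gradient descent on the angles
    (illustrative classical optimizer, not the one used in the paper)."""
    theta = [random.uniform(0.0, math.pi) for _ in range(n)]
    for _ in range(steps):
        base = mmd2(model_probs(theta), target)
        grad = []
        for j in range(n):
            shifted = list(theta)
            shifted[j] += eps
            grad.append((mmd2(model_probs(shifted), target) - base) / eps)
        theta = [t - lr * g for t, g in zip(theta, grad)]
    return theta
```

Because the Gaussian kernel is positive-definite, MMD² is non-negative and vanishes only when the two distributions agree, which is what makes it a usable training signal.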

Fermion Born machines and Gaussian decomposition

Scientists have introduced fermion Born machines (FBMs), a new approach to quantum generative modeling that enables classical training while preserving the potential for a quantum speed-up during sampling. The approach uses parameterized "magic states" and fermionic linear optics (FLO) transformations to optimize model parameters efficiently. The key innovation is to decompose these magic states into Gaussian operators, allowing rapid estimation of the expectation values needed for training. Training minimizes the squared maximum mean discrepancy (MMD²) loss, reformulated so that it depends only on expectation values of Pauli-Z strings. This reformulation avoids direct sampling from the model's probability distribution and significantly reduces the computational cost of training.
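The Z-string reformulation can be checked numerically in miniature. For a kernel that factorizes over bits, such as an exponential in Hamming distance, MMD² equals a weighted sum of squared differences of Z-string expectation values ⟨Z_S⟩ = E[(−1)^(S·x)], with one term per string S. The sketch below verifies this identity on small random distributions; the function names are illustrative, and in the actual FBM setting each ⟨Z_S⟩ would be estimated from the Gaussian decomposition rather than from an explicit probability table.

```python
import math
import random

def rand_dist(n, rng):
    """Random probability distribution over n-bit strings."""
    w = [rng.random() for _ in range(2 ** n)]
    t = sum(w)
    return [v / t for v in w]

def mmd2_direct(p, q, n, lam):
    """MMD^2 with kernel k(x, y) = exp(-lam * HammingDistance(x, y))."""
    s = 0.0
    for x in range(2 ** n):
        for y in range(2 ** n):
            k = math.exp(-lam * bin(x ^ y).count("1"))
            s += (p[x] - q[x]) * (p[y] - q[y]) * k
    return s

def z_expectation(p, S):
    """<Z_S>: expectation of the parity (-1)^(x . S) under p."""
    return sum(px * ((-1) ** bin(x & S).count("1")) for x, px in enumerate(p))

def mmd2_via_z_strings(p, q, n, lam):
    """Same quantity, as a weighted sum over Z-strings S: the per-bit kernel
    a + b * (-1)^(x_i XOR y_i) expands into coefficients a^(n-|S|) * b^|S|."""
    a = (1 + math.exp(-lam)) / 2
    b = (1 - math.exp(-lam)) / 2
    s = 0.0
    for S in range(2 ** n):
        w = bin(S).count("1")
        c = (a ** (n - w)) * (b ** w)
        s += c * (z_expectation(p, S) - z_expectation(q, S)) ** 2
    return s
```

Since b < a, the weight of a string decays with its weight |S|, so low-weight Z-strings dominate the loss; this is consistent with the observation later in the article that training on low-order correlations already performs well.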

The researchers showed that for FBMs, the expectation values required for training can be computed in polynomial time with guaranteed error bounds, making training scalable. The team validated the approach by implementing the FBM and training framework on systems of up to 160 qubits, confirming the model's effectiveness and scalability. They also developed an explicit simulation algorithm that outperforms existing methods, such as Heisenberg-picture evolution, in this particular setting, bringing practical quantum generative modeling a step closer to reality.
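For intuition about why the Gaussian (FLO) building blocks are classically tractable: a fermionic Gaussian state on n modes is fully described by a 2n×2n antisymmetric Majorana covariance matrix Γ, an FLO circuit acts on it as an SO(2n) rotation Γ → RΓRᵀ, and ⟨Z_j⟩ can be read off a single matrix entry. The sketch below is standard free-fermion bookkeeping under one common sign convention, not the authors' full algorithm, which additionally handles the non-Gaussian magic states via decomposition into Gaussian operators.

```python
import math
import numpy as np

def vacuum_covariance(n):
    """Majorana covariance matrix of |0...0> (one common convention):
    antisymmetric blocks giving <Z_j> = +1 for every mode."""
    gamma = np.zeros((2 * n, 2 * n))
    for j in range(n):
        gamma[2 * j, 2 * j + 1] = 1.0
        gamma[2 * j + 1, 2 * j] = -1.0
    return gamma

def majorana_rotation(n, a, b, theta):
    """SO(2n) rotation by theta in the plane of Majorana modes a and b;
    any FLO circuit composes such rotations."""
    r = np.eye(2 * n)
    r[a, a] = r[b, b] = math.cos(theta)
    r[a, b] = -math.sin(theta)
    r[b, a] = math.sin(theta)
    return r

def apply_flo(gamma, r):
    """Evolve the covariance matrix: Gamma -> R Gamma R^T."""
    return r @ gamma @ r.T

def z_expectation(gamma, j):
    """<Z_j> is a single entry of the covariance matrix."""
    return gamma[2 * j, 2 * j + 1]
```

The key point is the cost: the state is tracked with O(n²) numbers and each gate is a small matrix update, which is why such expectation values scale polynomially, in contrast to the 2ⁿ-dimensional state vector a generic circuit would require.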

Fermion Born machine demonstrates potential quantum advantage

Researchers have achieved a breakthrough in quantum generative modeling with fermion Born machines (FBMs): a classically trainable system capable of efficient sampling on quantum hardware. This work addresses key challenges in training generative models, particularly the difficulty of estimating gradients, by exploiting the unique properties of fermionic linear optics (FLO) circuits. The researchers show that with specially designed input states, the FBM's output probability distribution is hard to sample classically under reasonable complexity-theoretic assumptions, suggesting a potential quantum advantage. Crucially, they also show that the expectation value of a constant-weight Pauli-Z string can be computed in polynomial time, the advance that makes efficient classical training possible.

The method decomposes the magic states into Gaussian operators, enabling rapid estimation of expectation values and optimization of model parameters; this efficient computation is the key to scaling quantum generative models to larger systems. Experiments confirm that linear-depth quantum circuits suffice to sample from trained FBMs, a key requirement for practical implementation on near-term quantum computers. By employing parameterized quantum states and transformations, these models can represent complex probability distributions that challenge classical sampling methods. The central technical result is a classical algorithm that computes the needed expectation values efficiently, outperforming existing methods for this setting. Numerical experiments on systems of up to 160 qubits, using datasets derived from molecular fingerprints and gene sequences, demonstrate the effectiveness and scalability of both the model and the training framework.

The research team observed that training on low-order correlations already yielded good performance, and that overparameterizing the quantum circuits could be beneficial, offering practical guidance on configuring and training FBMs. The authors stress the importance of choosing target problems expressible through structured probability distributions, especially local correlations, to have the best chance of demonstrating a quantum advantage in machine learning. Future work may develop approximation algorithms that extend practical simulation to larger systems and higher-order correlations, drawing on techniques from the IQP framework, and further investigation of information-spreading properties within quantum circuits may yield additional insights.

👉 More information
🗞 Fermion Born Machine: Classical training of quantum generative models based on fermion sampling
🧠 arXiv: https://arxiv.org/abs/2511.13844


