Researchers have demonstrated a new approach to generative modeling with quantum computing that may offer a path to near-term practical application. Marcin Płodzień of Kilimanjaro Quantum Technology proposes a quantum scrambling Born machine in which entanglement generated by a fixed unitary acts as a scrambling reservoir and optimization is confined to the rotation of a single qubit. The study, conducted with collaborators at Kilimanjaro Quantum Technology, found that once enough entanglement is generated, the model accurately represents the target probability distribution, with minimal dependence on the specific scrambling mechanism used. Moreover, recasting the task as a variational Hamiltonian problem yields performance comparable to classical generative models, an important step toward quantum-enhanced generative modeling and a new paradigm for exploiting quantum resources.
A success rate of over 92% demonstrates a powerful new way to use quantum entanglement for complex computations. This machine learning approach mimics the way probabilities emerge in quantum mechanics and offers a potential shortcut to solving difficult problems. Current quantum generative models, inspired by expressive quantum circuits and classical methods, often suffer from “barren plateaus” given the practical limitations of near-term quantum hardware. At the heart of this effort is quantum scrambling, a process by which quantum information spreads rapidly throughout a many-body system. Rather than training an entire quantum circuit, the scientists use a fixed scrambling reservoir to create multi-qubit entanglement.
Only the rotation of a single qubit is then optimized, significantly reducing the number of trainable parameters and the risk of a barren plateau. The study investigates three types of entangling unitaries: the theoretically ideal Haar-random unitary, a physically realizable finite-depth brickwork random circuit, and analog time evolution under a nearest-neighbor spin-chain Hamiltonian.
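As a rough illustration of the architecture, the sketch below simulates a small instance in plain NumPy: a fixed Haar-random unitary plays the role of the scrambling reservoir, a single trainable qubit rotation supplies the only free parameters, and the Born rule turns the final state into a probability distribution. The qubit count, parameter values, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(dim, rng):
    # QR decomposition of a complex Gaussian matrix, with phases
    # corrected by the diagonal of R, yields a Haar-random unitary
    z = (rng.standard_normal((dim, dim))
         + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

n = 4                                # illustrative number of qubits
dim = 2 ** n
U_scramble = haar_unitary(dim, rng)  # fixed scrambling reservoir, never trained

def single_qubit_rotation(theta, phi, dim):
    # general rotation on the first qubit, identity on the rest
    r = np.array([[np.cos(theta / 2), -np.exp(1j * phi) * np.sin(theta / 2)],
                  [np.exp(-1j * phi) * np.sin(theta / 2), np.cos(theta / 2)]])
    return np.kron(r, np.eye(dim // 2))

def born_probs(theta, phi):
    # |psi> = U_scramble R(theta, phi) |0...0>; Born rule: p(x) = |<x|psi>|^2
    psi0 = np.zeros(dim, dtype=complex)
    psi0[0] = 1.0
    psi = U_scramble @ single_qubit_rotation(theta, phi, dim) @ psi0
    return np.abs(psi) ** 2

p = born_probs(theta=0.7, phi=1.3)   # the only two trainable parameters
```

Training would then adjust only `theta` and `phi` to bring `p` close to a target histogram, which is what keeps the parameter count so small.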
First, once the scrambling unitary produces entanglement resembling that of a maximally random state, the model effectively learns the target probability distribution, with minimal dependence on the particular scrambling mechanism used. The approach achieves performance comparable to established classical generative models, such as generative adversarial networks, variational autoencoders, and restricted Boltzmann machines, at a comparable number of parameters.
The scientists are now focused on understanding how tracing out auxiliary qubits can further increase the expressive power of the model, allowing it to represent more complex mixed-state distributions. Moreover, the architecture appears suitable for implementation on existing and near-term quantum devices, where training multi-qubit gates would incur significant overhead. Ultimately, this effort aims to demonstrate a practical path to harnessing quantum resources for generative modeling. For the one-dimensional benchmark, a multimodal distribution with five Gaussian peaks was used, discretized into 2^n bins, where n is the number of measured system qubits after accounting for auxiliaries.
The centers of the peaks were uniformly spaced, and the weights were drawn at random from a uniform distribution. The total number of trainable parameters is linearly proportional to both the number of layers L and the number of qubits N, specifically 3LN. When auxiliary qubits are included, the resulting reduced density matrix follows a fixed-trace Wishart distribution, allowing the model to represent distributions beyond those of a single pure state.
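The one-dimensional target described above can be sketched as follows; the qubit count, peak width, and random seed here are assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5                 # assumed number of measured system qubits
bins = 2 ** n         # target histogram has 2^n bins
x = np.arange(bins)

# five Gaussian peaks: centers uniformly spaced over the interior of the
# domain, mixture weights drawn at random from a uniform distribution
centers = np.linspace(0, bins - 1, 7)[1:-1]
weights = rng.uniform(size=5)
weights /= weights.sum()
sigma = bins / 20.0   # assumed peak width

target = np.zeros(bins)
for c, w in zip(centers, weights):
    target += w * np.exp(-0.5 * ((x - c) / sigma) ** 2)
target /= target.sum()  # normalized multimodal target distribution
```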
When the number of parameters is matched, comparisons with representative classical generative models, namely generative adversarial networks (GANs), variational autoencoders (VAEs), and restricted Boltzmann machines (RBMs), show that the quantum scrambling Born machine is competitive in the regime of small numbers of trainable parameters. Two physically feasible strategies approximate Haar-level scrambling: a random quantum circuit with a brickwork layout, and analog time evolution under a many-body Hamiltonian.
Already at depth t = 2, coarse entanglement properties such as subsystem purity and Rényi-2 entropy become indistinguishable from the Haar ensemble. For the two-dimensional benchmarks, the qubits were divided into two registers, each mapped to one axis of a grid. A mixture of a bivariate Gaussian distribution with adjustable correlation and four isotropic Gaussian distributions was studied. Meanwhile, the Shannon entropy of the target distribution remains constant during training, simplifying the optimization process.
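A minimal simulation shows how a brickwork circuit and a Rényi-2 entropy estimate fit together; the gate ensemble, qubit count, depth, and bipartition below are illustrative assumptions rather than the study's exact setup.

```python
import numpy as np

rng = np.random.default_rng(2)

def haar2q(rng):
    # Haar-random two-qubit gate via phase-corrected QR
    z = (rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

def apply_2q(psi, gate, i, n):
    # apply a two-qubit gate to qubits (i, i+1) of an n-qubit state vector
    psi = psi.reshape([2] * n)
    psi = np.moveaxis(psi, [i, i + 1], [0, 1]).reshape(4, -1)
    psi = gate @ psi
    psi = psi.reshape([2, 2] + [2] * (n - 2))
    psi = np.moveaxis(psi, [0, 1], [i, i + 1])
    return psi.reshape(-1)

def brickwork_state(n, depth, rng):
    # alternating layers of random two-qubit gates:
    # (0,1),(2,3),... in even layers, (1,2),(3,4),... in odd layers
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for t in range(depth):
        for i in range(t % 2, n - 1, 2):
            psi = apply_2q(psi, haar2q(rng), i, n)
    return psi

def renyi2(psi, na):
    # Rényi-2 entropy of the first na qubits: S2 = -log2 Tr(rho_A^2)
    m = psi.reshape(2 ** na, -1)
    rho = m @ m.conj().T
    purity = np.real(np.trace(rho @ rho))
    return -np.log2(purity)

psi = brickwork_state(n=6, depth=2, rng=rng)  # depth-2 brickwork circuit
s2 = renyi2(psi, na=3)                        # half-chain Rényi-2 entropy
```

Comparing `s2` over many circuit samples against the known Haar-ensemble average is one way to check how quickly the brickwork circuit approaches Haar-level scrambling.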
Born-rule generative modeling with an optimized single-qubit rotation and diverse entangling unitaries
This work supports a generative modeling approach that uses the Born rule to define probability distributions from parameterized quantum states. A quantum scrambling Born machine was built, employing a fixed entangling unitary as a scrambling reservoir to generate multi-qubit entanglement, and optimizing only the rotation of a single qubit to keep the number of learnable parameters small.
Three different entangling unitaries were investigated: the Haar-random unitary, representing ideal scrambling, and two physically plausible approximations, a finite-depth brickwork random circuit and analog time evolution under a nearest-neighbor spin-chain Hamiltonian. Performance was benchmarked against target distributions at different system sizes to evaluate the impact of the microscopic details of the scrambling reservoir, which turned out to be minor.
Once the entangler achieves near-Haar entanglement, the model accurately represents the target distribution, with little sensitivity to the particular scrambling mechanism used. By making the Hamiltonian coupling a trainable parameter, the generation task is transformed into a variational Hamiltonian problem, allowing optimization of the entire system.
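The variational Hamiltonian idea can be sketched as follows: a nearest-neighbor spin chain (here an assumed transverse-field Ising form, not necessarily the paper's) whose coupling J is treated as a trainable parameter, exponentiated into the entangling unitary.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def embed(single, site, n):
    # place a single-qubit operator at `site` in an n-qubit chain
    out = single if site == 0 else I2
    for k in range(1, n):
        out = np.kron(out, single if k == site else I2)
    return out

def spin_chain_hamiltonian(J, h, n):
    # nearest-neighbor chain: H = J * sum_i Z_i Z_{i+1} + h * sum_i X_i
    H = np.zeros((2 ** n, 2 ** n), dtype=complex)
    for i in range(n - 1):
        H += J * embed(Z, i, n) @ embed(Z, i + 1, n)
    for i in range(n):
        H += h * embed(X, i, n)
    return H

def entangling_unitary(J, h, n, t):
    # U = exp(-i H t) via eigendecomposition; J is the trainable coupling
    w, v = np.linalg.eigh(spin_chain_hamiltonian(J, h, n))
    return v @ np.diag(np.exp(-1j * w * t)) @ v.conj().T

U = entangling_unitary(J=1.0, h=0.8, n=3, t=2.0)
```

Because the output probabilities depend smoothly on J, a gradient-based or gradient-free optimizer can tune the coupling alongside the single-qubit rotation.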
To validate the quantum approach, comparisons with established classical generative models were required. Performance was evaluated against representative classical models with a matched number of parameters: generative adversarial networks (GANs), variational autoencoders (VAEs), and restricted Boltzmann machines (RBMs). The GAN used the Adam optimizer with an MLP totaling 302 parameters.
The VAE likewise used the Adam optimizer, with an MLP totaling 306 parameters. Finally, the RBM was trained with stochastic gradient descent (SGD) and used 102 hidden units, for a total of 308 parameters. This effort addresses a long-standing challenge in the field: creating generative models that are both powerful and practical given the limitations of current quantum hardware. For many years, building quantum systems that outperform classical ones has seemed to require ever-greater numbers of qubits and ever-longer coherence times.
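For reference, an RBM's parameter count is its weight matrix plus the two bias vectors. One combination consistent with the quoted total of 308, assuming for illustration 2 visible units alongside the 102 hidden units, is:

```python
def rbm_param_count(n_visible, n_hidden):
    # weights (n_visible x n_hidden) + visible biases + hidden biases
    return n_visible * n_hidden + n_visible + n_hidden

# hypothetical split: 2 visible units, 102 hidden units
total = rbm_param_count(2, 102)  # 2*102 + 2 + 102 = 308
```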
The researchers circumvent this challenge with a scrambling approach in which the entanglement is established in advance and only a single rotation is refined through optimization. The real strength of the method lies in its relative insensitivity to exactly how the initial entanglement is created. By demonstrating performance across a variety of entangling unitaries, the team charts a path to generative modeling even on imperfect quantum devices.
Comparisons with classical generative models are made at a comparable number of parameters. While this is a sensible benchmark, it does not capture the full potential of quantum speedups in training and sampling. Important questions remain regarding scalability: although the results hold at modest system sizes, extending the approach to distributions that require more qubits will certainly introduce new hurdles.
Moreover, the variational Hamiltonian formulation connects generative modeling to the broader field of quantum optimization, opening up exciting possibilities. Unlike many quantum algorithms that require fault-tolerant machines, this technique could find early application on noisy, intermediate-scale quantum devices. The work provides a concrete path to quantum-enhanced machine learning. The current focus is on generating distributions, but future research may apply these models to tasks such as data compression and the simulation of complex physical systems.
