Kipu Quantum has demonstrated significant advances in satellite image classification through quantum feature extraction, with clear accuracy improvements over leading classical methods. Kipu Quantum researchers, working with collaborators from multiple institutions including IBM and KPMG, developed a hybrid quantum-classical approach that harnesses the dynamics of many-body spin Hamiltonians to enhance multiclass image classification for space applications. Building on a robust ResNet50 baseline, the team achieved 86.5% accuracy, a 2-3% improvement over traditional approaches, which reach 83%, or 84% with transfer learning. “These results highlight the practical potential of current and near-term quantum processors in high-stakes, data-driven areas such as satellite image processing and remote sensing,” the researchers note, suggesting broad implications for real-world machine learning tasks.
Hamiltonian-based digitized quantum feature extraction (DQFE)
This is not just a theoretical promise. The team successfully implemented and tested a hybrid quantum-classical approach on several IBM quantum processors, achieving a 2-3% improvement in absolute accuracy. The core of this progress lies in harnessing the dynamics of many-body spin Hamiltonians to generate expressive quantum features. Unlike traditional feature engineering that relies on manually designed descriptors, DQFE leverages quantum mechanics to extract complex information directly from data. The process starts with classical image feature extraction and utilizes a pre-trained ResNet-50 model to reduce high-dimensional image data to a compact tabular representation with dimensions ranging from 15 to 156. This dimensionality reduction is critical for compatibility with current quantum hardware limitations. “To ensure compatibility with current quantum hardware, input data must be mapped into a feature space with dimensions no larger than the number of available qubits,” the researchers explain, highlighting the practical considerations driving their methodology.
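The reduction step can be illustrated with a minimal sketch. This is not the authors' code: the array shapes and the ReLU dense head are assumptions chosen to mirror the description (a frozen ResNet-50 backbone producing 2048-dimensional features, compressed by a trainable dense layer to n features, here n = 15, one of the reported reductions, followed by a 5-class output layer).

```python
import numpy as np

# Illustrative sketch (not the authors' code): ResNet-50's frozen
# convolutional backbone yields a 2048-dimensional feature vector per
# image; a trainable dense layer compresses it to n features, followed
# by a 5-neuron head for the five tree genus classes.
rng = np.random.default_rng(0)

n_images, backbone_dim, n_reduced, n_classes = 8, 2048, 15, 5

backbone_features = rng.normal(size=(n_images, backbone_dim))  # frozen ResNet-50 output
W_reduce = rng.normal(size=(backbone_dim, n_reduced)) * 0.01   # trainable dense layer
W_out = rng.normal(size=(n_reduced, n_classes)) * 0.01         # 5-neuron output layer

reduced = np.maximum(backbone_features @ W_reduce, 0.0)  # ReLU; the compact tabular representation
logits = reduced @ W_out                                 # class scores for the 5 tree genera

print(reduced.shape)  # (8, 15) -> compact features, one row per image
print(logits.shape)   # (8, 5)
```

The `reduced` array is what gets handed on for quantum processing: with 15 features, each sample fits within the qubit budget of current hardware, as the researchers note.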
These reduced features parameterize the Hamiltonian, which is processed through the DQFE algorithm. DQFE employs a counterdiabatic (CD) protocol in the impulse regime to extract features not only from the distribution of low-energy states but also from non-adiabatic transitions within the Hamiltonian. The resulting quantum-derived features are fed into classical classifiers such as gradient boosting and random forests to complete the hybrid approach. Experiments were conducted on the TreeSatAI benchmark, a real-world remote sensing dataset containing Sentinel-1 SAR data, multispectral imagery, and high-resolution aerial photography covering 15 tree genus classes, which was reduced to a challenging 5-class subset.
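The idea of "features parameterizing a Hamiltonian" can be sketched as follows. This is a simplified stand-in, not Kipu's DQFE implementation: the actual protocol uses digitized counterdiabatic dynamics, whereas here plain unitary evolution under a transverse-field Ising Hamiltonian (an assumed model form) illustrates only how classical data can set the fields and couplings, with local expectation values serving as quantum-derived features.

```python
import numpy as np

# Simplified stand-in (not Kipu's DQFE): reduced classical features set
# the fields h_i and couplings J_ij of a small transverse-field Ising
# Hamiltonian; the state evolves under it, and <Z_i> expectation values
# become quantum-derived features for a downstream classical classifier.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(op, site, n):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    mats = [op if k == site else I2 for k in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def hamiltonian_features(x, t=1.0):
    """Map a classical feature vector x (length 2n-1) to n quantum features."""
    n = (len(x) + 1) // 2
    h, J = x[:n], x[n:]                      # fields and couplings from the data
    dim = 2 ** n
    H = np.zeros((dim, dim), dtype=complex)
    for i in range(n):
        H += h[i] * op_on(X, i, n)           # transverse fields
    for i in range(n - 1):
        H += J[i] * op_on(Z, i, n) @ op_on(Z, i + 1, n)  # nearest-neighbour couplings
    evals, evecs = np.linalg.eigh(H)         # exact evolution via eigendecomposition
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi = U @ np.eye(dim)[:, 0]              # evolve |00...0>
    return np.array([(psi.conj() @ op_on(Z, i, n) @ psi).real for i in range(n)])

features = hamiltonian_features(np.linspace(0.2, 1.0, 7))  # 4 qubits
print(features)  # four values in [-1, 1], ready for gradient boosting / random forest
```

On real hardware the evolution is of course run as a digitized circuit rather than by exact matrix exponentiation; the sketch only shows the data-to-Hamiltonian-to-features flow.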
The researchers found that the quantum-classical method consistently boosted performance to 86.5% on IBM BOSTON hardware, even against a strong ResNet50 baseline that achieved around 84% accuracy with transfer learning. “These results demonstrate robust and reproducible quantum enhancement across multiple reduction strategies, hardware backends, and validation runs,” the research team claims, underscoring the reliability of their findings. In the underlying transfer-learning setup, the convolutional layers are frozen and a fully connected layer with n neurons is added, followed by an output layer with five neurons corresponding to the five tree genus classes.
TreeSatAI benchmark and multi-sensor data reduction
The burgeoning field of quantum machine learning is moving rapidly beyond theoretical expectations toward demonstrable applications, especially in data-intensive fields such as Earth observation. While fully fault-tolerant quantum computers remain a longer-term goal, researchers are actively exploring how near-term quantum processors can enhance classical machine learning pipelines, and the TreeSatAI benchmark is an important proving ground for these advances. Kipu Quantum’s team, working with collaborators at IBM and several European universities, has focused on ways to reduce the dimensionality of this complex data and prepare it for processing on existing quantum hardware. A key aspect of their research is addressing the constraints imposed by current quantum processors, particularly the limited number of available qubits. To overcome this, the researchers considered different feature reduction strategies and projected the TreeSatAI data into 15, 120, and 156 features.
This wasn’t just about shrinking the dataset: the team used a pre-trained ResNet-50 model as a feature extractor with newly added dense layers to strategically choose which information to keep. “The convolutional layers are frozen and a fully connected layer with n neurons is added, followed by an output layer with five neurons corresponding to the five tree species classes,” the methodology explains, highlighting the careful balance between data compression and information preservation. This approach allowed the team to target a variety of quantum hardware backends, including IBM AER (simulator), IBM BOSTON and IBM PITTSBURGH (Heron r3), and IBM KINGSTON (Heron r2), demonstrating the adaptability of the technique. The core of their innovation is a Hamiltonian-based quantum feature extraction method called digitized quantum feature extraction (DQFE). This process encodes the reduced feature vectors into a quantum circuit and uses a counterdiabatic evolution protocol to extract meaningful patterns.
The team consistently observed that combining classical features with features generated by DQFE improved classification performance compared to a purely classical pipeline. Specifically, with the 120-feature reduction and transfer learning, the classical baseline achieved an accuracy of approximately 84 percent, but when processed with DQFE on IBM BOSTON hardware, this accuracy increased to approximately 86.5 percent.
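The final hybrid step can be sketched on synthetic data. Everything here is a stand-in (the TreeSatAI features, the quantum-derived features, and the label structure are all fabricated for illustration): the point is only the mechanics of concatenating classical and quantum feature sets and training a gradient-boosting classifier, mirroring the pipeline's last stage.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hedged sketch on synthetic data (not TreeSatAI): classical
# ResNet-derived features are concatenated with quantum-derived
# features and fed to a gradient-boosting classifier, as in the
# paper's final classification stage.
rng = np.random.default_rng(1)
n_samples, n_classical, n_quantum, n_classes = 300, 120, 8, 5

y = rng.integers(0, n_classes, size=n_samples)
classical_feats = rng.normal(size=(n_samples, n_classical)) + y[:, None] * 0.1
quantum_feats = rng.normal(size=(n_samples, n_quantum)) + np.cos(y)[:, None] * 0.2

hybrid = np.hstack([classical_feats, quantum_feats])  # classical + DQFE features
X_tr, X_te, y_tr, y_te = train_test_split(hybrid, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"hybrid test accuracy on toy data: {acc:.2f}")
```

Any accuracy obtained on this toy data is meaningless in itself; the reported 84% to 86.5% improvement comes from the real experiments on IBM hardware.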
The DQFE workflow thus represents a viable approach to leveraging current and near-future quantum devices to enhance classical machine learning pipelines and establish a pathway to demonstrate practical quantum utility beyond purely theoretical capabilities in high-impact commercial domains.
ResNet50 baseline accuracy and transfer learning results
Quantum-enhanced classification: refining a classical baseline
Kipu Quantum, in collaboration with researchers at IBM and several European universities, is pushing the boundaries of satellite image classification by integrating quantum computing with established machine learning techniques. Their research focuses on enhancing classical algorithms, in particular using the ResNet50 architecture as an important basis for comparison and improvement. Starting from an initial classical accuracy of 83% on the ResNet50 baseline on the TreeSatAI benchmark dataset (a complex remote sensing collection including Sentinel-1 SAR, multispectral imagery, and aerial photography), the team sought to demonstrate that quantum feature extraction could exceed this established performance. The researchers weren’t just looking for a small gain: they systematically investigated how dimensionality reduction affects both classical and quantum performance. Beyond the initial 83% accuracy, they found that a transfer learning approach can boost the ResNet50 baseline to 84%.
However, the real progress came from the application of their quantum-classical method, which achieved an accuracy of 86.5%, a clear and reproducible improvement over the robust classical approach. This is not a one-off result; the team observed a consistent trend across multiple hardware platforms, including IBM’s AER simulator and the BOSTON, PITTSBURGH, and KINGSTON processors. The 120-feature reduction proved particularly effective, with the classical ResNet50 model reaching around 84% accuracy using transfer learning.
Indeed, in the majority of evaluated scenarios, the hybrid classical-quantum approach yielded the best overall performance.
Quantum Classical Pipeline for Enhanced Classification
The marriage of quantum computing and classical machine learning is beginning to bring tangible benefits to complex data analysis, especially in demanding fields such as satellite image classification. This research points the way toward moving beyond theoretical possibilities and leveraging near-term quantum processors for practical, high-stakes applications. At the heart of this progress is a technique called digitized quantum feature extraction (DQFE) that transforms classical data into a quantum representation suitable for processing. The team’s methodology enhances existing classical pipelines rather than introducing entirely new quantum algorithms. “Our quantum-enhanced image classification pipeline consists of three main stages,” the researchers explain, detailing the process of classical feature extraction, quantum feature generation with DQFE, and subsequent classical classification. This enables gradual integration of quantum capabilities without requiring a complete overhaul of established machine learning infrastructure.
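The three-stage structure can be summarized schematically. The stub functions below are hypothetical placeholders (the real ResNet-50 extractor, DQFE circuit, and trained classifier are not reproduced); the sketch only shows how the stages hand data to one another.

```python
import numpy as np

# Schematic of the three pipeline stages, with hypothetical stubs
# standing in for the real components (ResNet-50, the DQFE circuit,
# and the trained classifier are not reproduced here).
rng = np.random.default_rng(2)

def extract_classical_features(images, n_reduced=15):
    """Stage 1: pre-trained CNN + dense head -> compact tabular features."""
    return rng.normal(size=(len(images), n_reduced))

def generate_quantum_features(tabular):
    """Stage 2: DQFE stand-in; features parameterize a Hamiltonian circuit."""
    return np.tanh(tabular @ rng.normal(size=(tabular.shape[1], 4)))

def classify(classical, quantum):
    """Stage 3: classical classifier over the combined feature set."""
    combined = np.hstack([classical, quantum])
    return combined.argmax(axis=1) % 5  # placeholder 5-class decision rule

images = [None] * 10                    # stand-in for a satellite image batch
tab = extract_classical_features(images)
q = generate_quantum_features(tab)
labels = classify(tab, q)
print(labels.shape)  # (10,)
```

Because each stage exposes a plain array interface, the quantum step can be swapped in or out without restructuring the surrounding classical infrastructure, which is the gradual-integration point the researchers emphasize.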
This reduction is critical because current quantum hardware has a limited number of qubits available to represent data. Experiments conducted on IBM’s quantum processors, including the BOSTON, PITTSBURGH, and KINGSTON systems, consistently showed improved accuracy. Using the TreeSatAI benchmark, a real-world remote sensing dataset, the team’s classical baseline reached 83% accuracy, improving to 84% with a transfer learning approach. By integrating quantum-derived features generated with DQFE on IBM BOSTON hardware, however, accuracy increased to approximately 86.5 percent, a reproducible improvement of 2-3% across multiple configurations and validation runs. This isn’t just about achieving higher numbers; it’s about demonstrating the feasibility of quantum machine learning with existing technology. “These results demonstrate that quantum feature extraction can provide value even on today’s noisy near-term devices,” the researchers said. The ability to improve classification accuracy in areas such as land use monitoring, environmental modeling, and climate resilience highlights the potential of quantum technologies to address critical global challenges.
This study demonstrates that quantum feature extraction with a DQFE workflow leads to consistent and reproducible performance improvements for multiclass image classification.
Possibilities of space applications and quantum machine learning
Although quantum computing often conjures up images of futuristic, fault-tolerant machines, practical applications are emerging on surprisingly limited hardware, especially in the demanding field of space-based data analysis. Contrary to expectations that practical quantum advantage is still a long way off, researchers at Kipu Quantum, IBM, and several European universities have demonstrated clear performance improvements in satellite image classification using near-term quantum processors. This is not about completely replacing classical systems, but about augmenting them with quantum-derived features to extract greater precision from existing data. Recognizing the limitations of current quantum hardware, they strategically reduced the dataset to a challenging five-class subset and considered various dimensionality reduction techniques that projected the data into 15, 120, and 156 features. These quantum-derived features are not simply bolted onto a quantum computer; they are combined with conventional classical processing. “Among quantum technologies, quantum machine learning (QML) can build expressive data representations, making it well-suited for space applications,” the researchers said.
Importantly, this approach consistently improves absolute accuracy by 2-3% across multiple reduction strategies and hardware platforms. These results suggest that quantum feature extraction can provide tangible value, even for noisy intermediate-scale quantum (NISQ) devices, with exciting potential for operational space applications ranging from land use monitoring to climate resilience.
