Integrated Quantum Technologies (IQT) has announced VEIL (Vector Encoded Information Layer), a machine learning framework designed to leverage sensitive data without exposing the raw information itself. A 25-page technical paper published on arXiv details the architecture, which transforms data into an anonymized representation before it leaves a secure environment. The approach aims to overcome the limitations of existing privacy-protection methods such as homomorphic encryption and differential privacy, which can hinder performance and scalability. According to Jeremy Samuelson, vice president of AI and innovation at IQT, VEIL is designed to “maintain the usefulness of predictions by explicitly coordinating representation learning with downstream goals.” The framework’s theoretical underpinnings draw on research by Dr. Mohammad Taievi, Assistant Professor of Professional Practice at Simon Fraser University, which suggests that the encoded data is structurally irreversible, protecting sensitive input during both model training and inference.

VEIL architecture enables lossy data encoding

This development addresses key challenges in privacy-preserving machine learning: existing methods often suffer from performance shortcomings and scalability issues, limitations the VEIL architecture seeks to overcome. At the core of VEIL is information compression anonymization (ICA), a process that transforms raw input data into a low-dimensional latent representation within a secure environment. Only these anonymized representations are used for model training and inference, so the original sensitive data is never exposed. The published research describes these encoded representations as “structurally irreversible,” meaning it is virtually impossible to reconstruct the original data from the encoded output. The framework establishes clear boundaries between data sources, training environments, and inference environments to ensure sensitive information remains protected.
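The encode-then-train flow described above can be sketched in a few lines. VEIL’s actual encoder has not been published, so the fixed random projection below is purely a stand-in assumption for a learned lossy encoder; the dimensions are illustrative.

```python
import numpy as np

# Minimal sketch of the ICA idea: a lossy encoder compresses a raw
# record to a low-dimensional latent vector inside the secure
# environment, and only the latent representation leaves it.
# (Hypothetical stand-in; not VEIL's published encoder.)

rng = np.random.default_rng(0)

INPUT_DIM = 64    # raw feature dimension (illustrative)
LATENT_DIM = 8    # compressed latent dimension (illustrative)

# A fixed random projection stands in for a learned encoder:
# 64 numbers in, 8 numbers out, so the map is many-to-one.
W = rng.normal(size=(LATENT_DIM, INPUT_DIM)) / np.sqrt(INPUT_DIM)

def encode(x: np.ndarray) -> np.ndarray:
    """Produce the anonymized representation that leaves the secure side."""
    return W @ x

raw = rng.normal(size=INPUT_DIM)   # sensitive record: stays inside
latent = encode(raw)               # only this is used downstream
```

Downstream training and inference would then consume `latent` vectors only; the 8 latent values cannot uniquely determine the 64 raw values.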

Information compression anonymization for supervised machine learning

Integrated Quantum Technologies’ approach to protecting data during machine learning centers on information compression anonymization (ICA). The technique, detailed in a recently published white paper available on arXiv, does more than obscure the data: it transforms it into a format that is inherently difficult to reconstruct, even with large amounts of computational resources. Rather than relying on encryption or added random noise, ICA achieves this through architectural and informational constraints designed to preserve the data’s usefulness for machine learning tasks. The paper, titled “Information Compression Anonymization: Nondegradative Sensitive Input Protection for Privacy-Preserving Supervised Machine Learning,” outlines how dimensionality reduction and increased uncertainty for potential attackers work together to minimize reconstruction risk. The 25-page document, with 17 diagrams, details VEIL’s architecture and mathematical foundations, providing a pathway to leverage sensitive data without exposing it outside a trusted environment.
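The role dimensionality reduction plays in reconstruction risk can be illustrated with a toy linear case (the paper’s actual constructions are not public, so this is an assumption-laden illustration): even an attacker who knows the encoder and observes the latent vector faces an underdetermined system, and a best-effort estimate misses everything in the encoder’s null space.

```python
import numpy as np

# Illustrative only: with a linear encoder W (k << n), observing
# z = W @ x leaves an attacker with k equations in n unknowns.
# The pseudoinverse yields the least-norm estimate, which recovers
# only the component of x in W's row space.

rng = np.random.default_rng(0)
n, k = 64, 8                      # raw dim >> latent dim (assumption)
W = rng.normal(size=(k, n))

x = rng.normal(size=n)            # sensitive raw record
z = W @ x                         # what the attacker observes

x_guess = np.linalg.pinv(W) @ z   # attacker's best linear estimate
rel_err = np.linalg.norm(x_guess - x) / np.linalg.norm(x)
print(rel_err)  # large: most of x lies in W's null space and is lost
```

For a random 64-dimensional record compressed to 8 dimensions, the bulk of the signal is unrecoverable in this toy setting; VEIL’s claim of structural irreversibility extends this intuition to learned, nonlinear encoders.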

The paper also examines the limitations of existing privacy-preserving machine learning approaches, including homomorphic encryption and differential privacy, which can incur computational overhead, increased latency, and degraded predictive performance depending on the implementation.

IQT’s VEIL framework maintains performance without sacrificing privacy

Jeremy Samuelson, vice president of AI and innovation at Integrated Quantum Technologies (IQT), recently published a white paper detailing VEIL, a machine learning framework designed to address growing concerns around data privacy without sacrificing predictive power. The study, currently available on arXiv, introduces information compression anonymization (ICA) as a core component, building data protection directly into the model’s architecture. Samuelson explains that VEIL aims to maintain, and potentially improve, the usefulness of predictions by tailoring representation learning to specific downstream objectives, unlike methods that rely on cryptographic computations or random noise. IQT’s long-term strategy focuses on building resilient AI systems, with VEIL serving as its first commercial product designed to protect sensitive AI data and workflows.

“Unlike privacy methods that rely on cryptographic computations or stochastic noise injection, VEIL™ is designed to maintain predictive utility by explicitly coordinating representation learning with downstream objectives,” the paper claims.
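One plausible reading of “coordinating representation learning with downstream objectives” is joint training: the bottleneck encoder and the task head are optimized together on the task loss, so the compressed representation retains task-relevant signal rather than being shaped by a generic reconstruction or noise criterion. The sketch below illustrates that reading with a linear encoder and logistic head on synthetic data; all names, dimensions, and training details are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))              # raw records (secure side)
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # synthetic downstream labels

W_enc = rng.normal(size=(16, 4)) * 0.1      # encoder: 16 -> 4 bottleneck
w_head = np.zeros(4)                        # linear task head on latents

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(300):
    Z = X @ W_enc                 # latent representations
    p = sigmoid(Z @ w_head)       # downstream predictions
    err = (p - y) / len(y)
    # Joint gradients of the task (cross-entropy) loss: the same
    # objective shapes both the head and the encoder.
    g_head = Z.T @ err
    g_enc = X.T @ (err[:, None] * w_head[None, :])
    w_head -= lr * g_head
    W_enc -= lr * g_enc

acc = float(np.mean((sigmoid((X @ W_enc) @ w_head) > 0.5) == (y > 0.5)))
```

Because the encoder’s gradient flows through the task loss, the 4-dimensional latent keeps the directions that matter for prediction while discarding the rest of the 16-dimensional input, which is the utility-preserving behavior the quoted claim describes.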
