On Wednesday, KPMG Studios, the consulting giant's incubator, launched Cranium, a startup that protects AI applications and models. Cranium's "end-to-end AI security and trust platform" spans two areas, MLOps (machine learning operations) and cybersecurity, providing visibility into AI security and supply chain risk.
"Fundamentally, data scientists do not understand cybersecurity risks in AI, nor do cyber professionals understand data science the way they understand other topics in technology," he said. In other words, there is a significant understanding gap between data scientists and cybersecurity professionals.
Cranium gives key stakeholders across the AI lifecycle a common operational view, improving visibility and collaboration across teams. The platform captures both in-development and deployed AI pipelines, including the assets involved across the entire AI lifecycle, quantifies an organization's AI security risk, and establishes continuous monitoring. Customers can also adopt an AI security framework that gives data science and security teams a foundation for building proactive, holistic AI security programs.
To keep data and systems safe, Cranium maps AI pipelines, validates their security, and monitors for adversarial threats. The technology integrates with existing environments, allowing organizations to test, train, and deploy AI models without changing workflows. Additionally, security teams can use Cranium's playbooks alongside the software to protect their AI systems and comply with existing US and EU regulatory standards.
With the launch of Cranium, KPMG is capitalizing on growing concerns about adversarial AI: the intentional manipulation or modification of an AI system to make it produce inaccurate or harmful results. For example, a compromised self-driving car could cause a serious accident, or a manipulated facial recognition system could misidentify an individual and lead to a false arrest. These attacks can originate from a variety of sources, including malicious actors, vulnerabilities, or errors, and could be used to spread disinformation, conduct cyberattacks, or commit other types of crime.
Cranium isn’t the only company looking to protect AI applications from adversarial AI attacks. Competitors such as HiddenLayer and Picus are already working on tools to detect and prevent AI attacks.
Opportunity for innovation
Opportunities for entrepreneurs in this space are significant as the risks of adversarial AI are likely to increase in the coming years. There are also incentives for the major players in the AI space (OpenAI, Google, Microsoft, and possibly IBM) to focus on protecting the AI models and platforms they are creating.
Enterprises can focus their AI security efforts on detection and prevention, adversarial training, explainability and transparency, or post-attack recovery. Software companies can develop tools and techniques to identify and block adversarial input, such as images and text that have been intentionally altered to mislead AI systems. They can also develop techniques to detect anomalous or unexpected behavior in AI systems, which can be a sign of an attack.
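As a rough illustration of the detection angle, the sketch below flags inputs whose classifier output is unusually diffuse, one simple signal that an input may be out of distribution or manipulated. The entropy threshold and function names here are illustrative assumptions, not part of any particular product.

```python
import numpy as np

def prediction_entropy(probs: np.ndarray) -> float:
    """Shannon entropy of a softmax output; high entropy can signal unusual input."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-(probs * np.log(probs)).sum())

def flag_suspicious(probs: np.ndarray, threshold: float = 1.2) -> bool:
    """Flag an input whose prediction entropy exceeds a chosen threshold."""
    return prediction_entropy(probs) > threshold

# Example: a confident prediction vs. a diffuse, near-uniform one
confident = np.array([0.96, 0.02, 0.01, 0.01])
diffuse = np.array([0.30, 0.26, 0.24, 0.20])
print(flag_suspicious(confident))  # False
print(flag_suspicious(diffuse))    # True
```

In practice, such a check would be only one signal among many, combined with input sanitization and model-specific detectors.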
Another approach to protecting against adversarial AI is to “train” AI systems to withstand attacks. By exposing an AI system to adversarial examples during the training process, developers can help the system learn to recognize and defend against similar attacks in the future. Software companies can develop new algorithms and techniques for adversarial training, as well as tools to evaluate the effectiveness of these techniques.
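A minimal sketch of that idea, assuming a PyTorch image classifier: adversarial examples are generated with the fast gradient sign method (FGSM) and mixed into each training step. The model, optimizer, and epsilon value are placeholders rather than recommended settings.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Craft an FGSM adversarial example by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach().clamp(0, 1)

def adversarial_training_step(model, optimizer, x, y, eps=0.03):
    """One training step on a mix of clean and adversarial examples."""
    model.train()
    x_adv = fgsm_example(model, x, y, eps)
    optimizer.zero_grad()  # clears any gradients accumulated while crafting x_adv
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Evaluating such defenses would still require stronger attacks than FGSM, which is part of the opportunity for dedicated tooling.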
With AI, it can be difficult to understand how a system is making its decisions. This lack of transparency makes adversarial attacks harder to detect and defend against. Software companies can develop tools and techniques that make AI systems more explainable and transparent, so developers and users can better understand how a system reaches its decisions and identify potential vulnerabilities.
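One simple example of that transparency angle is a gradient-based saliency map, which shows how strongly each input feature influences a chosen class score. The sketch below again assumes a PyTorch classifier with a single-example batch; it is an illustration, not a full explainability toolkit.

```python
import torch

def input_saliency(model, x, target_class):
    """Gradient of the target-class score w.r.t. the input: a simple saliency map."""
    model.eval()
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # assumes a batch of one and 2D logits
    score.backward()
    return x.grad.abs()  # larger values = more influential input features
```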
Even with the best defenses in place, AI systems can still be compromised. In such cases, it is important to have the tools and techniques to recover from the attack and restore the system to a safe and functional state. Software companies can develop tools to identify and remove malicious code and input, as well as techniques to restore systems to a “clean” state.
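As a hedged sketch of the recovery angle, the snippet below records a cryptographic fingerprint of a deployed model artifact and rolls it back to a trusted copy if the fingerprint no longer matches. The paths and function names are illustrative assumptions; real recovery tooling would also cover data, code, and configuration.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a model artifact so a known-good fingerprint can be recorded at release time."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def restore_if_tampered(current: Path, trusted_copy: Path, trusted_hash: str) -> bool:
    """Roll the deployed model back to a trusted copy if its hash no longer matches."""
    if sha256_of(current) == trusted_hash:
        return False  # artifact unchanged, nothing to do
    current.write_bytes(trusted_copy.read_bytes())
    return True
```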
However, protecting AI models can be difficult. Testing and validating the effectiveness of AI security solutions is challenging because attackers constantly adapt and evolve their techniques. There is also the risk of unintended consequences, such as an AI security solution itself introducing new vulnerabilities.
Overall, while the risks of adversarial AI are significant, so are the entrepreneurial opportunities for software companies to innovate in this space. In addition to improving the safety and reliability of AI systems, protection from adversarial AI helps build trust and confidence in AI among users and stakeholders. This helps drive adoption and innovation in the field.