Vulnerabilities in AI and ML applications are on the rise

The number of AI-related zero-days has tripled since November 2023, according to the latest findings from Protect AI's Hunter community of over 15,000 maintainers and security researchers.

In April 2024 alone, 48 vulnerabilities were discovered in widely used open source software (OSS) projects, including MLflow, Ray, and Triton Inference Server.

That figure represents a 220% increase over the 15 vulnerabilities first reported in November 2023, the report said.

Among these vulnerabilities, the most prevalent threat reported is remote code execution (RCE), which allows an attacker to run commands or programs on a victim's machine or server without physical access to it. A successful RCE attack can give the attacker complete control over the compromised system, leading to unauthorized access, data breaches, and even full system takeover.
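To make the threat concrete, here is a minimal, generic sketch (not taken from the report) of one RCE pattern that has repeatedly affected ML tooling: deserializing untrusted input with Python's `pickle`, the same mechanism many model formats are built on. The `Malicious` class and the evaluated expression are illustrative, not real exploit code.

```python
import pickle

class Malicious:
    # __reduce__ tells pickle how to reconstruct an object. A crafted
    # payload can make the unpickler call any callable with chosen
    # arguments -- here, eval() on an attacker-controlled expression.
    def __reduce__(self):
        return (eval, ("__import__('os').getcwd()",))

# The attacker serializes the object and ships it as, say, an
# uploaded "model" artifact.
payload = pickle.dumps(Malicious())

# A service that unpickles untrusted input executes the attacker's
# code as a side effect of deserialization -- no further interaction
# with the server is required.
result = pickle.loads(payload)
print(result)
```

This is why hardened inference servers avoid unpickling user-supplied artifacts, or restrict loading to vetted, signed model formats.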

Protect AI’s sobering statistics highlight the accelerating scale and velocity of AI/ML zero-day issues, suggesting a growing need for stronger security measures in AI development environments.

Old vulnerabilities and new practices

From the perspective of Marcello Salvati, Senior Threat Researcher at Protect AI, the report surfaced several notable vulnerabilities in AI/ML tools.

“A few standouts would probably be PyTorch Serve RCE and BentoML RCE,” Salvati says. Both allow attackers to achieve remote code execution on servers running these popular projects.

Both PyTorch Serve and BentoML are inference servers; that is, they are designed to be exposed to users so those users can interact with AI/ML models. “This factor makes these vulnerabilities very easy to exploit and valuable to attackers,” Salvati explained.

The number of basic web application vulnerabilities found in these AI/ML projects is the report's biggest surprise. “With the proliferation of web frameworks with secure coding practices and 'built-in' security guardrails, these types of vulnerabilities are rarely seen in most web applications these days,” Salvati states.
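One example of the kind of "basic" web application flaw Salvati is describing is path traversal in a naive file-serving endpoint, a pattern that crops up when ML tools serve model artifacts by name. The sketch below is a generic illustration (the `BASE_DIR` and function names are hypothetical, not from any of the affected projects):

```python
import os

BASE_DIR = "/srv/models"

def load_model_unsafe(name: str) -> str:
    # Classic path traversal: a name like "../../etc/passwd"
    # escapes BASE_DIR entirely.
    return os.path.join(BASE_DIR, name)

def load_model_safe(name: str) -> str:
    # Normalize the path, then verify it still resolves to a
    # location inside BASE_DIR before touching the filesystem.
    path = os.path.realpath(os.path.join(BASE_DIR, name))
    if not path.startswith(BASE_DIR + os.sep):
        raise ValueError("path traversal attempt blocked")
    return path
```

Mature web frameworks apply checks like the second function automatically, which is why these bugs had largely disappeared from conventional web applications before resurfacing in hand-rolled AI/ML serving code.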

Salvati said the resurgence of these types of vulnerabilities shows that security is taking a backseat in AI/ML-related tools, going against every lesson the industry has learned over the past decade or so.

Security weaknesses with LLM tools

From Protect AI's perspective, as LLM tools grow in popularity, projects built without security experience are being deployed rapidly. Organizations may feel forced to adopt LLM-based projects due to competitive pressures or a desire to maintain an edge in an ever-evolving threat landscape.

However, the rapid adoption of these projects raises concerns about security maturity. In the rush to implement LLM tools, organizations can overlook important aspects of security, such as comprehensive risk assessments, robust testing protocols, and adherence to industry best practices.

As a result, organizations risk deploying solutions that are not sufficiently hardened against emerging threats or that lack safeguards to protect sensitive data and assets. Organizations must prioritize security maturity alongside innovation.

Adopt least privilege, zero trust

The adoption of AI is proceeding at breakneck (some would argue reckless) speed. Salvati said it is important for organizations and security teams to apply standard web application security practices to protect themselves from rapidly expanding and maturing threats.

“The concept of least privilege applies here, as well as adopting a security model that includes zero trust,” Salvati explained. “The most important thing is to train developers and AI engineers on safe coding practices and basic security principles.” Conducting internal security audits before introducing new AI/ML tools and libraries also reduces risk.

Given the acceleration of the AI/ML space, it is very difficult to make predictions 12-24 months ahead. “My only certainty is that by going 'full steam ahead' with the implementation of these tools, companies will be compromised more frequently,” Salvati warned.

AI weaknesses and advantages

While previous reports have shown that the adoption of GenAI by malicious actors poses new security risks to organizations, the same technology can also be used defensively.

Indeed, while IT security teams grapple with new vulnerabilities introduced by AI adoption, AI-based cyber tools can also help organizations struggling to respond to growing threats.
