Thales launches Imperva AI Application Security

Applications of AI


AI has fundamentally changed the threat landscape.

Today, 67% of organizations worldwide have adopted or built internal LLM and GenAI applications. Just under 70% of respondents to the Thales 2025 Data Threat Report believe this rapidly changing ecosystem is the GenAI security risk they are most concerned about. According to the 2025 Imperva Bad Bot Report, automated traffic now accounts for 51% of all web activity.

However, so far, the way we secure our applications has not changed. We still rely on traditional tools that cannot protect against the unique threats and vulnerabilities of AI. That's why Thales developed Imperva AI Application Security.

LLM introduces unique vulnerabilities to enterprise application environments. Traditional security tools such as WAF, endpoint protection, and network security cannot understand or protect against threats specific to LLM logic and interfaces. Here's why:

Traditional security focuses on blocking known malicious code and network anomalies. However, LLM threats often leverage the intended functionality (also known as logic) of the model to perform malicious actions. This makes detection of these threats extremely difficult.

Many of these threats are included in the OWASP Top 10 for LLM. for example:

  • Immediate injection: This threat involves using carefully crafted prompts to manipulate LLM to behave in ways it is not supposed to. For example, they may ignore system instructions, disclose internal data, or take other unintended actions. Because this is a natural language inputtraditional filters often treat this as harmless.
  • Sensitive data leaked: LLMs are trained and processed on vast amounts of data, including potentially sensitive user information and company-proprietary information. An attacker could use prompt injection or exploit flaws in the application to trick the model into outputting this sensitive data.
  • System prompt leak: System prompts are the core instructions and operational logic of LLM. These are unique and important to the functioning of the LLM. If an attacker were able to obtain these prompts, they could potentially use them to further exploit or circumvent the model's defenses.

Traditional web application firewalls are great at blocking malicious input. This means you can detect and prevent threats like SQL injection and cross-site scripting. However, it lacks the necessary context and logic to analyze the output of LLM.

As a result, these traditional security tools alone cannot determine whether LLM output is harmful, unsafe, noncompliant, or exposes sensitive data. This leaves organizations susceptible to: Improper output handling.

Additionally, because LLM performs computations, an attacker can overwhelm it with resource-intensive queries. This is known as endless consumption. This can lead to costly slowdowns, denial of service (DoS) issues, or significantly higher operational costs. The only defense traditional tools have against this type of attack is basic rate limiting, but that alone is ineffective.

The takeaway here is that AI-specific risks require AI-specific mitigations. That's exactly what Imperva AI Application Security provides.

Imperva AI Application Security is an enterprise-grade security solution specifically designed to protect GenAI and LLM applications. Provides specifically designed runtime protection between enterprise applications and the LLMs on which they run.

Think of Imperva AI Application Security as an intelligent shield. Analyze all inputs and outputs in real time to detect and stop malicious activity. Something that protects the unique behavior and output of a GenAI application. without impacting application performance.

The main features are:

  • Immediate injection protection: Block malicious or manipulative prompts before they reach your model.
  • Protecting sensitive data: Detect and block exposure of sensitive and proprietary information such as personally identifiable information (PII), financial data, and application programming interface (API) keys.
  • System prompt leak prevention: Prevent attackers from accessing internal instructions and operational logic.
  • Improper output handling: Filter harmful, unsafe, or non-compliant AI output before it reaches end users.
  • Unlimited consumption reduction: Prevent rogue or resource-intensive AI tasks that can cause slowdowns, outages, and high operational costs.

The solution is flexible and environment-agnostic, so it can be seamlessly deployed and integrated into existing environments. Ultimately, it provides a level of protection that cannot be achieved with traditional tools.

No, you still need WAF, endpoint protection, and network security tools. Imperva AI Application Security solves a very specific problem: a critical gap in the AI ​​interface of applications where traditional tools cannot detect or prevent threats. This allows organizations to:

  • Deploy AI-driven applications with confidence. Protect your users and your business from the risks inherent in AI.
  • Prevent incidents that are costly and lead to reputational damage. Stop prompt injections, data leaks, and model manipulation before they impact your business.
  • Accelerate innovation while maintaining compliance. Enable secure deployment of AI at scale without disrupting workflows or regulatory obligations.
  • Focus on business growth: By ensuring unique LLM threats are managed, teams can innovate and scale without adding security concerns.

Imperva AI Application Security is a key component of Thales' broader security vision. It is part of the Thales AI Runtime Security suite of solutions for AI protection known as Thales AI Security Fabric, which also includes RAG Data Protection and additional capabilities launching in 2026.

Thales is uniquely positioned to protect the entire GenAI lifecycle, protecting every layer of the AI ​​system from user to application, application to LLM, and the underlying data store.

Thales AI Security Fabric enables organizations to:

  • Enabling AI to grow your business: Maximize the value of AI and enable your team to innovate and scale without adding security risks.
  • Prevent incidents that are costly and lead to reputational damage. Reduce the risk of instant injections, data leaks, and model manipulation before they impact your business.
  • Accelerate innovation while maintaining compliance. Give Agentic AI and Gen AI access to datasets while protecting sensitive and regulated data.

Thales' Imperva AI application security product is expected to be generally available in mid-2026. Until then, please consider partnering with us moving forward. Or, why not consider other aspects of best-in-class Imperva application security?



Source link