AI Security: A Strategic Framework for Closing the AI ​​Exposure Gap

Applications of AI


As AI adoption accelerates, CISOs face the dual challenge of driving innovation while mitigating the risks of a rapidly expanding attack surface. Tenable’s five-step framework for securing AI provides a systematic approach to mitigating AI security risks as organizations race to achieve the productivity benefits of AI.

Important points

  1. Get a five-step framework to help you secure the use of AI across your organization and reduce the security risks created by AI tools.
  2. Securing the use of enterprise AI requires a combination of robust AI detection capabilities, mechanisms to protect AI workloads and the infrastructure that runs it, prompt-level visibility, the ability to analyze AI security risks alongside other risks, and technical controls to ensure compliance with an organization’s AI acceptable use policies.
  3. Learn why existing security controls are inadequate when it comes to protecting AI.

As AI transforms enterprises, security leaders like myself are grappling with how to best manage the security risks it creates.

The challenge is that AI is embedded almost everywhere throughout an organization, including employee productivity tools, SaaS platforms, developer libraries, cloud services, APIs, and web apps. result? Our teams are left with a widening AI exposure gap, a vast and largely invisible attack surface that traditional security tools are not designed to monitor.

Complicating the issue is that AI risks often cannot be isolated into a single asset. Rather, it originates from a set of interconnected elements (applications, infrastructure, identity, data, etc.) that are collectively at risk. Here’s an example of what I mean.

For example, suppose your employees are using an approved AI chatbot for technical support resolution that relies on Amazon Bedrock agents, and those agents have elevated privileges to access sensitive internal systems such as enterprise resource planning and customer resource management tools. Once a threat actor gains access to an agent through an unpatched vulnerability on an employee’s laptop, the threat actor can potentially use the agent to compromise sensitive data, putting seemingly safe use of approved AI tools at risk with high consequences.

Securing data in today’s AI-assisted work environments has become exponentially more difficult. This is because each of the countless interactions with AI assets (e.g., every prompt, file upload, generated response, integration, configuration) can put intellectual property, customer information, and sensitive plans at risk.

So how do you manage this challenging new attack surface that grows indefinitely as your organization expands its use of AI? This is a strategic framework I’ve implemented to manage, discover, and protect AI as it emerges and poses risks to your organization.

Strategic Framework: 5 Steps to Securing Enterprise AI

1. Establish an AI governance committee, framework, and acceptable use policy

Securing AI starts with setting clear expectations with employees about acceptable use. Establish acceptable use policies for AI, including:

  • Provides a list of approved and unapproved AI tools.
  • Define good and bad business use cases.
  • Describes the types of data that can and cannot be shared with LLM.
  • It defines the rules for data processing.
  • Deal with copyright law. and
  • Describes the consequences of policy violations.

Based on your organization’s AI acceptable use policy, you can implement controls to enforce and monitor compliance with that policy.

2. Detect AI across the attack surface

When I talk to other CISOs about securing AI, they say that discovering and detecting AI is one of their biggest challenges. I understand that. AI is everywhere, and much of it is very difficult to find. One reason for this is that AI’s presence extends far beyond clearly visible central control systems.

As security leaders, we need to consider:

  • AI assets, agents, plugins, browser extensions, and workloads.
    • Run in the cloud or on-premises
    • Accessible internally or externally
    • approved or unapproved
  • Introducing the forgotten AI test
  • AI tools embedded in endpoints and applications
  • All AI software, libraries, models, and services
  • AI services, large-scale language model (LLM) APIs, and AI chatbots exposed on endpoints and cloud applications.

Existing data loss prevention (DLP), cloud access security broker (CASB), and cloud security posture management (CSPM) solutions are good starting points for discovering AI assets. However, the non-deterministic nature of AI works against traditional rule-based security protections, and holistic detection requires specialized detection tools. They also need unique discovery capabilities to identify embedded AI tools and libraries and understand how AI systems work together to put them at risk.

With continuous and complete visibility into your enterprise’s AI usage, you can understand exactly what workloads and infrastructure need to be protected, start assessing AI exposure across your organization, and prioritize specific remediation actions accordingly.

3. Protect AI workloads and agents

Because AI workloads are deeply interconnected and often prone to critical misconfigurations and over-permissions, this step includes proactively securing the infrastructure on which AI runs and hardening AI workloads before attackers can exploit them. For example, if your organization’s developers are building AI-enabled applications in the cloud, you need to ensure that your cloud infrastructure is secure.

Effective protection requires the following features:

  • Identify misconfigurations and risky configurations in cloud-based AI workloads.
  • Detect vulnerabilities that can expose your models, agents, data, or APIs to unauthorized access.
  • Implement identity-driven exposure reduction because AI relies heavily on non-human identities.
  • Detect over-privileged service accounts, roles, and machine IDs used by AI workflows and strictly enforce least-privileged access.
  • Understand potential attack vectors from AI assets and workloads that can impact business-critical systems or lead to sensitive data.
  • Quickly isolate unstable or compromised AI agents in a controlled environment to minimize the impact of a potential breach.

I’ll be discussing this specific topic of AI workload and agent protection in more detail in a follow-up blog that I have planned. In the meantime, a detailed risk analysis of your AI stack can help you understand how identity weaknesses and infrastructure flaws can combine to expose you to significant risks. Based on these insights, you can provide security teams with actionable playbooks to harden their environments and ensure that services run on a secure, resilient, and validated architecture.

4. Evaluate AI usage and interactions

This step includes understanding how employees will interact with generative AI tools and autonomous agents and ensuring that they are not violating the organization’s AI acceptable use policy. It is important to understand how data flows through all AI applications and determine where exposure is occurring.

This requires detailed visibility into:

  • Who is using AI?
  • for what purpose
  • Where dangerous activities or misuse occur
  • Data employees are sharing through prompts, uploads, and automated actions
  • Attempts to jailbreak approved AI tools and provide malicious prompts

With immediate visibility into employee AI usage, security teams can detect policy violations and enforce safe AI operation. Security teams can also identify sensitive data, including intellectual property and PII, that employees and agents share with AI tools through prompts, uploads, and automated interactions that could be exposed through accidental disclosure. It also enables security teams to detect and respond to new AI-specific threats and exploits, such as prompt injection attempts and other malicious instructions designed to manipulate AI systems.

Whether you discover a malicious tool connected to the Microsoft Copilot agent or an employee misuses AI in inappropriate situations for which the tool was not intended (such as an internal hiring decision), you need to react quickly to address the exposure and enforce safe use.

5. Analyze AI security risks in the context of other risks

Mitigating AI security risks requires more than isolated detection of unpatched vulnerabilities in AI software, weak configurations in AI systems, and overprivileged agents. After all, AI is becoming fully integrated into all apps, data, and business processes.

Mitigating AI security risks requires an integrated, automated approach to collect context-rich AI security data and analyze it in conjunction with other exposed data such as exposed S3 buckets, vulnerable laptops, and orphaned accounts with administrative privileges. At Tenable, we call this approach exposure management, and we see the industry rapidly adopting it. Leak management allows you to proactively see how security weaknesses across your environment combine to create a breach, a high-risk attack path that leads to your organization’s most sensitive systems and data.

Exposure management also surfaces risks based on precise context, including specific AI engines, users, and sessions, enabling high-fidelity issue management and rapid response. It’s about understanding how a combination of harmful risks combine to create business risk. There is one medium-severity misconfiguration in Amazon Bedrock that can connect to an unsecured LLM and provide overprovisioned entitlement access to agents. Exposure management requires a thorough understanding of your entire environment and attack surface.

Securing the future of AI innovation

The rapid integration of AI across the enterprise has created a complex, interconnected attack surface that cannot be addressed by traditional security controls. To close the AI ​​exposure gap, security leaders must move from a reactive, tool-centric approach to a proactive integration strategy.

By implementing this five-step framework, you can build a resilient security posture that evolves with AI technology. After all, effective exposure management isn’t about slowing down innovation. It’s about providing the guardrails needed to ensure organizations can safely and confidently harness the power of AI.



Source link