Cisco-recommended security measures (open source scanning, vulnerability testing, application firewalls, and AI-focused data loss prevention) address various aspects of this risk and form the foundation of a comprehensive AI security strategy.
As AI adoption continues to expand across the Middle East across sectors such as government, financial services, energy, and critical infrastructure, organizations face increasing pressure to secure AI applications throughout their lifecycle. From the data used to train models to the deployment of the models themselves, CISOs and IT leaders must manage new risks while maintaining digital trust. In response, Cisco has highlighted four priority focus areas for organizations to consider to secure their AI applications as they scale deployments. This guidance outlines how security teams can adapt established application security practices to AI to help organizations reduce risk without slowing innovation.
The first area of focus is open source scanning. AI application development often relies on components such as open source models, public datasets, and third-party libraries. Although these resources accelerate development, they can contain vulnerabilities and malicious injections that can compromise the entire system. Regularly scanning these components can help identify and mitigate risks early in the development process.
The second area of focus is vulnerability testing. Static testing involves validating all components of an AI application (binaries, datasets, models, etc.) to detect potential vulnerabilities such as backdoors or contaminated data. Dynamic testing evaluates how your model behaves in different scenarios in your production environment. Cisco also recommends red teaming algorithms to simulate a wide range of adversarial techniques without the need for manual testing to improve model resiliency.
Third, application firewalls are emerging to address the unique safety and security risks posed by generative AI, particularly large-scale language models (LLMs). These AI-specific firewalls act as model-agnostic guardrails that monitor AI application traffic in transit to prevent failures and enforce policies. These help mitigate threats such as prompt injection, denial of service (DoS) attacks, and personally identifiable information (PII) disclosure.
Finally, data loss prevention (DLP) for AI applications is important given the dynamic nature of natural language content. Because traditional DLP approaches are insufficient, AI-focused DLP inspects both inputs and outputs to prevent sensitive data from being leaked. Input DLP can restrict file uploads, block copy-and-paste functionality, and restrict access to unauthorized AI tools. Output DLP utilizes guardrail filters to ensure that model responses do not contain PII, intellectual property, or other sensitive information.
“As AI adoption accelerates across the region, organizations are rapidly moving from pilot to production environments, and that transition also changes their risk profile. Securing AI applications requires going beyond traditional application controls and implementing data and third-party component feed models. The entire lifecycle of AI must be protected, right down to the operation of the model during actual use. By applying familiar security principles in an AI-specific manner, organizations in the Middle East can scale innovation with confidence while speeding deployment and reducing risks such as sensitive data leaks.”
– Fady Younes, Managing Director, Cybersecurity, Cisco Middle East and Africa, Turkiye, Romania, CIS
In summary, risks exist at nearly every stage of the AI lifecycle, from sourcing supply chain components to development and deployment. Cisco-recommended security measures (open source scanning, vulnerability testing, application firewalls, and AI-focused data loss prevention) address various aspects of this risk and form the foundation of a comprehensive AI security strategy. Together, these enable organizations to innovate securely while mitigating potential threats to AI deployments.
