How AI firewalls protect new business applications


AI and cybersecurity have been closely linked for many years. Defenders use AI to analyze incoming data packets and block malicious activity. Bad actors, on the other hand, use AI to find and exploit gaps in their targets' security. AI is fueling an increasingly intense arms race.

AI has been used to enhance defense systems by analyzing vast amounts of incoming traffic at machine speed and identifying known and emerging patterns. As criminals, hackers, and nation-states deploy increasingly sophisticated attacks, AI tools can block many of them, escalating only the most critical or complex attack behaviors to human defenders.

Related article: How AI can leverage diversity to improve cybersecurity

But attackers also have access to AI systems, and they are becoming more sophisticated both at finding exploits and at using technologies such as AI to multiply the reach of a small cadre of criminal masterminds. That may sound like an exaggeration, but there is no shortage of extremely talented programmers using their skills to attack infrastructure, with bad actors motivated by money, fear, and ideology to cause damage.

None of this is new; it's been an ongoing challenge for years. What is new is a new type of target: business-value AI systems, mostly in the form of chatbots. This article explores how firewalls have been used in the past to protect business value, and how new types of firewalls, currently being developed and tested, aim to address the unique operational and trust challenges of AI chatbots in the commercial sector.

Understanding firewalls

The types of attacks and defenses practiced by traditional (yes, it's been around long enough to call it "traditional") AI-based cybersecurity occur at the network and transport layers of the network stack. The OSI model is a conceptual framework developed by the International Organization for Standardization to understand and describe the various operational layers of modern networks.

OSI model (David Gewirtz/ZDNET)

The network layer routes packets across the network, and the transport layer manages data transmission to ensure reliability and flow control between end systems.

Also: Want to work in AI? How to pivot your career in 5 steps

Traditional attacks occur at Layers 3 and 4 of the OSI network model, respectively: much closer to the network hardware and cabling, and far removed from the application layer, Layer 7. Most of the applications we humans rely on daily operate at that application layer. There is another way to think about this: the network infrastructure plumbing resides in the lower layers, but the business value resides in Layer 7.

The network and transport layers are like underground chains of interconnecting caverns and passageways that connect buildings within a city, acting as conduits for things like shipping and waste disposal. The application layer is like a beautiful storefront where customers shop.

In the digital world, network firewalls have been at the forefront for years, defending against Layer 3 and 4 attacks. They scan data as it arrives, determine whether packets contain hidden payloads, and can block activity originating from particularly problematic locations.

Also: Employees who enter sensitive data into generative AI tools do so at their own risk

But there's another type of firewall that's been around for a while: the Web Application Firewall (WAF). Its role is to block activity that occurs at the web application level.

A WAF monitors, filters, and blocks malicious HTTP traffic. It prevents SQL injection, cross-site scripting (XSS), injection flaws, broken authentication, and sensitive data leaks. It provides custom rule sets for application-specific protection and mitigates DDoS attacks, among other protections. In other words, it prevents bad people from doing bad things to good web pages.
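To make that concrete, here is a minimal sketch of the rule-matching idea behind a WAF. The pattern set and the `inspect`/`allow` helpers are illustrative only; real WAF rule sets are vastly larger and more nuanced.

```python
import re

# Illustrative WAF-style rules: each named pattern flags one attack family.
# A production rule set would be far broader and regularly updated.
RULES = {
    "sql_injection": re.compile(r"(?i)\b(union\s+select|or\s+1=1|drop\s+table)\b"),
    "xss": re.compile(r"(?i)<\s*script\b|javascript:"),
    "path_traversal": re.compile(r"\.\./"),
}

def inspect(request_field: str) -> list[str]:
    """Return the names of any rules the incoming request field trips."""
    return [name for name, pattern in RULES.items() if pattern.search(request_field)]

def allow(request_field: str) -> bool:
    """Admit the request only if no rule matches."""
    return not inspect(request_field)
```

A request like `q=widgets` passes, while a query string carrying `1 OR 1=1` or an inline `<script>` tag is flagged and blocked before it reaches the application.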

AI firewalls are now starting to emerge that protect Layer 7 data (business value) at the AI chatbot level. Before discussing how these firewalls protect that data, it's helpful to understand how AI chatbots can be attacked.

When bad people attack good AI chatbots

Over the past year or so, practical, actionable generative AI has emerged. This new AI variant does not exist only in ChatGPT. Companies are deploying it everywhere, but especially on customer-facing front ends for user support, proactive sales assistance, and even medical diagnostics.

Also: AI is transforming organizations everywhere. How these six companies are leading the way

There are four approaches to attacking AI chatbots. These AI solutions are so new that these approaches are still mostly theoretical, but we expect to see real-world hackers following these paths in the next year or so.

Adversarial attack: Science News discusses how exploits can attack the way AI models work. Researchers are constructing phrases and prompts that appear legitimate to AI models but are designed to manipulate their responses or trigger errors. The goal is to make AI models divulge sensitive information, break security protocols, or respond in ways that could embarrass their operators.

I described a very simplistic variation of this type of attack, in which a user entered a misleading prompt into an unsecured chatbot interface at Chevrolet of Watsonville. Things didn't go well.

Indirect prompt injection: More and more chatbots read live web pages as part of a conversation with a user. Those web pages can contain anything. Typically, when an AI system scrapes a website's content, it is smart enough to distinguish between the human-readable text containing the knowledge it needs to process and the supporting code and directives that format the page.

Also: We’re not ready for how generative AI will impact elections

However, attackers can embed instructions or formatting in those web pages that manipulate AI models into revealing personal or sensitive information. This is a potentially big danger, since AI models rely heavily on data from across the vast internet. Researchers at MIT investigated this issue and concluded that "AI chatbots are a security disaster."
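One common mitigation is to sanitize scraped pages before they reach the model. The sketch below is a simple illustration of that idea, assuming regex-based heuristics; the hidden-region patterns and the "classic injection phrasing" check are placeholders, not a complete defense, since injected instructions can take many other forms.

```python
import re

# Illustrative patterns for content a human reader would never see:
# HTML comments and elements hidden via inline CSS.
HIDDEN_PATTERNS = [
    re.compile(r"<!--.*?-->", re.DOTALL),
    re.compile(
        r"<[^>]+style=[\"'][^\"']*display:\s*none[^\"']*[\"'][^>]*>.*?</[^>]+>",
        re.DOTALL | re.IGNORECASE,
    ),
]
# Phrases characteristic of injection attempts, flagged for human review.
SUSPICIOUS = re.compile(r"(?i)ignore (all )?previous instructions|you are now")

def sanitize_page(html: str) -> str:
    """Strip hidden regions, then crudely remove remaining tags."""
    for pattern in HIDDEN_PATTERNS:
        html = pattern.sub("", html)
    text = re.sub(r"<[^>]+>", " ", html)
    return re.sub(r"\s+", " ", text).strip()

def looks_injected(text: str) -> bool:
    """Flag surviving text that still contains injection phrasing."""
    return bool(SUSPICIOUS.search(text))
```

The point of the two-step design is that sanitization removes instructions the attacker hid from human eyes, while the second check catches injection attempts placed in plain sight.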

Data poisoning: I'm pretty sure this is where large language model (LLM) developers go out of their way to shoot themselves in the virtual foot. Data poisoning is the act of injecting bad training data into a language model during development; it's essentially the equivalent of getting a lesson on the spherical nature of the Earth from the Flat Earth Society. The idea is to push false or intentionally misleading data into the LLM during training so that it later spits out misinformation.

My favorite example is when Google licensed content from Stack Overflow for its Gemini LLM. Stack Overflow is one of the largest online developer support forums, with over 100 million developers. But as any developer who has spent more than five minutes on the site knows, for every clear and helpful answer there are five to ten useless ones, and perhaps 20 more that claim legitimacy while drowning out the rest.

Related article: Best VPN services of 2024: Tested by experts

Training Gemini with that data means Gemini gets not only a ton of unique and valuable answers to all kinds of programming problems, but also a huge collection of answers that lead to terrible results.

Now imagine a hacker who knows that Stack Overflow data is regularly used to train Gemini (because it has been covered on ZDNET and other tech media). They could post questions and answers intentionally designed to mislead Gemini and its users.

Distributed denial of service: If you didn't think DDoS could be used against AI chatbots, think again. Every AI query requires enormous amounts of data and computing resources. If a hacker floods a chatbot with queries, it can slow to a crawl or freeze.

Additionally, many vertical chatbots license AI APIs from vendors such as OpenAI. For licensees paying for metered access, a high volume of bogus queries drives up costs. If hackers artificially inflate the number of API calls, licensees may exceed their license allocation or face significantly increased charges from their AI provider.
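The standard defense against this kind of query flooding is rate limiting. Below is a minimal token-bucket sketch of the kind of control a firewall can apply in front of a metered LLM API; the capacity and refill values are illustrative, not anyone's published defaults.

```python
import time

class TokenBucket:
    """Each query spends one token; tokens refill at a steady rate,
    so bursts beyond the bucket's capacity get refused."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up the bucket in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

With a bucket of capacity 5 refilling at one token per second, a burst of six rapid queries sees the first five admitted and the sixth refused, capping both the load on the model and the metered API bill.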

Defense against AI attacks

Chatbots are becoming a critical component of business-value infrastructure, so continuous operation is essential, and the integrity of the business value they deliver must be protected. This has led to a new form of firewall, designed specifically to protect AI infrastructure.

Also: How does ChatGPT actually work?

Generative AI firewalls are just beginning to emerge, such as the Firewall for AI service announced by edge network security company Cloudflare. Cloudflare's firewall sits between an application's chatbot interface and the LLM itself, intercepting API calls from the application before they reach the LLM (the brains of the AI implementation). The firewall also intercepts responses to those API calls, validating them for malicious activity.
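The proxy pattern described above can be sketched in a few lines. This is a generic illustration of the intercept-both-directions idea, not Cloudflare's implementation: `query_llm` is a stand-in for a real model call, and both check functions are deliberately trivial placeholders.

```python
def query_llm(prompt: str) -> str:
    # Stand-in for the real model call behind the firewall.
    return f"echo: {prompt}"

def prompt_ok(prompt: str) -> bool:
    # Placeholder inbound check on the API call.
    return "ignore previous instructions" not in prompt.lower()

def response_ok(text: str) -> bool:
    # Placeholder outbound check on the model's response.
    return "BEGIN PRIVATE KEY" not in text

def firewalled_query(prompt: str) -> str:
    """Inspect the prompt on the way in and the completion on the way out."""
    if not prompt_ok(prompt):
        return "[blocked: suspicious prompt]"
    reply = query_llm(prompt)
    if not response_ok(reply):
        return "[blocked: response withheld]"
    return reply
```

The key design point is that the firewall validates traffic in both directions: a clean prompt can still produce a response that should never leave the model.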

Among the protections provided by this new form of firewall is sensitive data discovery (SDD). SDD is not new to web application firewalls, but the likelihood of a chatbot unintentionally surfacing sensitive data is quite high, so applying data protection rules between AI models and business applications adds an important layer of security.

Additionally, SDD prevents chatbot users (such as internal employees) from sharing sensitive business information with AI models provided by external companies such as OpenAI. This security mode helps keep that information out of the public model's knowledge base.
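At its simplest, SDD is pattern matching on text moving between the application and the model. The sketch below assumes a small regex rule set for illustration; real SDD implementations cover far more data types and use detection techniques beyond regular expressions.

```python
import re

# Illustrative sensitive-data patterns; a production rule set is far broader.
SDD_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace each detected sensitive value with a labeled placeholder
    before the text crosses the firewall in either direction."""
    for label, pattern in SDD_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Run in both directions, the same filter stops the model from surfacing sensitive data to users and stops employees from leaking it into an external model's context.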

Also: Is AI in software engineering having an "Oppenheimer moment"? Here's what you need to know

Once fully deployed, Cloudflare's AI firewall is also intended to manage model abuse, a form of prompt injection and adversarial attack aimed at corrupting the model's output. Cloudflare specifically cites this use case:

A common use case we hear from AI Gateway customers is that they want to avoid their application generating harmful, offensive, or problematic language. The risks of not controlling model output include reputational damage and harm to end users through unreliable responses.

There are other ways this type of firewall can mitigate attacks, especially mass attacks like query bombing, which is effectively a special-purpose DDoS. Firewalls employ rate-limiting features that throttle the speed and volume of queries and filter out queries that appear specifically designed to break the API.

Not totally ready for prime time

Cloudflare says customers can deploy protection against high-volume DDoS-style attacks and sensitive data detection now. However, the prompt validation feature, essentially the AI-centric heart of Firewall for AI, is still in development and is expected to enter beta in the coming months.

Also: Generative AI surprises us in 2023 – but all the magic comes at a price

Normally, we don't like to talk about products at this early a stage of development. But as AI enters mainstream business application infrastructure and becomes a target of attacks, I think it is important to show the substantial work being done to provide AI-based protection.

Stay tuned. We will be tracking the adoption of AI and how it is changing the contours of the business applications world. We'll also look at security issues and how businesses can keep their deployments secure.

IT has always been an arms race. AI has simply brought new types of weapons to deploy and defend.


You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.




