F5 Tightens Data Leak Screws With AI Application Delivery

The cloud took time. After cloud computing established its early gambits (built on promises of lower capital expenditure through the migration to service-based computing and data storage), the IT industry worked through the teething troubles of security, scalability and service suitability. The rise of artificial intelligence is going through a similar adolescence.

Market analysis from application delivery and security platform company F5 suggests that two-thirds of organizations can now demonstrate a level of “medium AI readiness” but lack robust governance and cross-cloud management capabilities related to performance, integration and security. The company's latest 2025 State of AI Application Strategy Report is an additional study conducted with 150 AI strategists, combining feedback from 650 global IT leaders, all representing organizations that earn at least $200 million in annual revenue.

AI Firewall Crisis? What Crisis?

Perhaps this issue stems from where AI is today: an experimental prototyping technology used for casual web-centric research, chatbot experiences inside social media apps, and image-generation entertainment. If anything, it could be argued that this non-critical use of AI services builds early familiarity with very powerful technologies that will need to be locked down inside corporate control mechanisms when deployed in the workplace.

This proposition is arguably borne out by F5's estimates. Today, 71% of organizations use AI to enhance security, yet only 31% deploy AI firewalls. When AI is core to business strategy, readiness requires more than experimentation: security, scalability and alignment are required. F5 states that the average organization now uses three AI models, and the use of multiple models typically correlates with deployment across multiple computing environments or locations.

Fight fire with fire

As a company, F5 has been working for some time to build architectural alignment for the new AI era. Having made specific updates in this direction earlier this year, the company is now detailing new AI-driven features in its F5 Application Delivery and Security Platform. Fighting fire with fire (countering AI risk with AI defenses), the platform extensions include features such as the F5 AI Gateway service to protect against data leaks, alongside new capabilities in the F5 BIG-IP SSL Orchestrator.

A species of middleware, AI gateways act as filtering tools that inspect and validate data prompts passing between AI applications and the large language models that serve them. By overseeing all interactions between AI services and language models, an AI gateway sees potentially chaotic data exchanges and brings order to them, reducing risk and enabling efficient use, secure operation and responsible AI.
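The middleware filtering role described above can be sketched in a few lines of Python. This is a hypothetical illustration only, not F5's implementation: the `forward_to_llm` stand-in and the blocklist are invented for the example.

```python
# Minimal sketch of an AI gateway: a proxy that inspects prompts
# before they reach the LLM. Hypothetical illustration, not F5's code.

def forward_to_llm(prompt: str) -> str:
    """Stand-in for the real model call (assumption for this sketch)."""
    return f"echo: {prompt}"

BLOCKED_TERMS = ["internal_project_x"]  # assumed corporate blocklist

def gateway(prompt: str) -> str:
    # Inbound inspection: refuse prompts that mention restricted topics.
    for term in BLOCKED_TERMS:
        if term in prompt.lower():
            return "[blocked by AI gateway policy]"
    # Forward the vetted prompt to the model; a real gateway would
    # apply the same inspection to the response on the way back.
    return forward_to_llm(prompt)

print(gateway("summarise this press release"))
print(gateway("tell me about Internal_Project_X"))
```

The key design point is that the gateway sits in the data path for every interaction, so policy is enforced in one place rather than in each application.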

F5 president and CEO François Locoh-Donou highlighted the progress made on the company's Application Delivery and Security Platform this year, speaking to the London press this week to explain where his company's vision for safe operations across the new AI landscape will actually land. The F5 Application Delivery and Security Platform is designed to ensure that CIOs, CISOs, AIOps users and all engineers in the modern DevOps team working with hybrid multi-cloud infrastructure can manage the key infrastructure, data movement and security challenges they face.

F5 CEO: Why Did Complexity Happen?

“What's really happening in the world of application delivery today is that the organizations we work with (mainly large corporations and government agencies) have made their systems more complicated over the past decades,” Locoh-Donou said. “It's mainly due to the fact that businesses have established cloud and data center real estate across many different service providers, so there are multiple elements of infrastructure they manage. Compounding that truth is the fact that applications themselves are composed of multiple APIs and microservices. The multiplicity of it all creates a ‘ball of fire’ in terms of system management and the total delivery of applications.”

Instead of taking incremental steps to address these challenges, Locoh-Donou suggests that a single platform approach makes more sense for delivering and securing applications on-premises, in public cloud and at the edge. He argues that organizations should not need to choose a different application delivery infrastructure to run successful applications in different form factors.

With all that background, adding AI to the mix makes things even more difficult, as these applications are inherently distributed (usually calling data and models from multiple sources) and agentic AI introduces agent-to-agent communication, adding even more dynamic behavior to that vortex.

“My view on this is that AI is being deployed so quickly that we should look closely at what happened in the first decade of cloud computing. What started with the arrival of ChatGPT in November 2022 has moved at a significantly faster pace,” said Locoh-Donou. “I think generative AI could be the most sensitive vulnerability an organization has to manage right now. Using an AI gateway, organizations can route traffic to the right LLM, apply policies to the AI engine and, through this process, manage cost per token.”
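The routing and cost-per-token idea in that quote can be made concrete with a small sketch. The model names, prices and routing rule below are invented for illustration; they are not F5's logic or any vendor's real price list.

```python
# Sketch of policy-based LLM routing with per-token cost estimation.
# Model names, prices and the routing rule are assumptions.

MODELS = {
    "small": {"usd_per_1k_tokens": 0.0005},
    "large": {"usd_per_1k_tokens": 0.01},
}

def route(prompt: str) -> str:
    # Toy policy: long prompts go to the larger, pricier model.
    return "large" if len(prompt.split()) > 50 else "small"

def estimate_cost(model: str, token_count: int) -> float:
    # Cost accounting per token is what makes usage governable.
    return MODELS[model]["usd_per_1k_tokens"] * token_count / 1000

chosen = route("short question")
print(chosen, estimate_cost(chosen, 2000))
```

In a production gateway the routing policy would consider sensitivity, latency and compliance constraints as well as prompt size, but the control point is the same: one chokepoint that picks the model and meters the spend.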

Alert To Alarm Fatigue

For F5's part, Locoh-Donou says his team is on a journey to inject AI into the platform, bringing AI to application delivery controller technology. This is about making it much easier, via a natural language interface, for customers to safely deliver the apps they need. This year, the company acquired Fletch.AI to help administrators overcome the “alarm fatigue” problem.

Locoh-Donou also points out that as businesses adopt AI and hybrid cloud technologies, sensitive data often moves across encrypted traffic and unauthorized AI tools, creating security blind spots. Traditional security methods struggle to detect or prevent data leaks in these complex environments. He says F5 will answer this challenge by enabling organizations to achieve key compliance and security outcomes in real time, including the ability to detect, classify and stop data leaks in encrypted and AI-driven traffic. It also addresses risks from unauthorized AI use (also known as shadow AI) and sensitive data exposure, working with controls that enforce consistent policies across applications, APIs and AI services to maintain security and compliance.

Data In Transit

Data leak detection and prevention features will come to the F5 AI Gateway this quarter. The service is driven by technology F5 acquired from LeakSignal, a data governance and protection specialist recognized by the National Institute of Standards and Technology for its work in data classification, remediation and AI-driven policy enforcement for data in transit. The new feature examines AI prompts and responses for sensitive data, such as personally identifiable information, and applies customer-defined policies to redact, block or log it.
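The redact/block/log policy model described above can be sketched simply. The regex patterns and policy table here are simplified assumptions for illustration, not LeakSignal's or F5's classifiers, which the article says extend to encrypted traffic and richer data classes.

```python
# Sketch of redact / block / log actions applied to AI prompts and
# responses. Patterns and policy table are illustrative assumptions.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Customer-defined policy: the action to take per data class.
POLICY = {"email": "redact", "ssn": "block"}

def apply_policy(text: str) -> str:
    for label, pattern in PATTERNS.items():
        if not pattern.search(text):
            continue
        action = POLICY.get(label, "log")
        if action == "block":
            # Stop the whole message before it leaves the network.
            return "[message blocked: sensitive data detected]"
        if action == "redact":
            # Strip just the sensitive span; let the rest through.
            text = pattern.sub(f"[{label} redacted]", text)
        else:
            # "log": pass through but record the occurrence.
            print(f"logged {label} occurrence")
    return text

print(apply_policy("contact alice@example.com"))
```

The distinction between the three actions matters operationally: blocking protects the most but breaks workflows, redaction preserves utility, and logging supports the compliance audit trail mentioned in the article.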

With the integration and continued development of this AI data protection technology, F5 says it is expanding its ability to inspect in-transit data and apply policies that protect sensitive information before it leaves the network. This addition is promised to simplify compliance and reduce risk across hybrid and multicloud deployments.

Competitive Analysis: Application And API Delivery

Any vendor worth its salt in application delivery and security provides a similarly thorough approach to application programming interface (API) management and security, along with AI gateway capabilities. F5 shares bench space in this sector with companies including Kong, Cloudflare, Akamai and Google Cloud (one of the three major cloud hyperscalers), with Microsoft Azure and AWS also having fingers in the pie. While each company has comparable capabilities and cost schedules, the more obvious differentiation appears in how far each vendor can expand into the edge computing space and, importantly, apply AI accelerators and intelligence boosters.

Pure-play application delivery controller competition comes from AWS again, this time alongside Barracuda, HAProxy, NetScaler, A10 Networks, Radware and (back to the hyperscalers once more) Microsoft Azure. All three hyperscalers are known for their AI inference routing capabilities: a feature that checks that an application's resource requests correctly match the parameter requirements of a particular deployment. It's not an alternative to application delivery controllers, but it's certainly another element of this market mix.

The sheer size of the major cloud players naturally weighs on F5, precisely because it is extending a platform vision: the large service providers can bundle some selection of what F5 offers as a standalone service (though F5's offering is a platform in itself).

Combining in-transit and real-time data protection with the need to provide control over every layer of application execution, the target market clearly has ample surface area. What matters now is whether the “safe and secure AI” space grows as quickly as the wider AI landscape itself.
