Radware has released a new study analysing cybersecurity risks associated with increasing adoption of agent artificial intelligence (AI) systems in enterprise environments.
The report entitled “The Internet of the Internet: the next threat surface” examines how AI agents, driven by large-scale language models (LLMs), are integrated into business operations. These systems differ from standard chatbots by using protocols such as Model Context Protocol (MCP) and Agent-to-Agent (A2A) to act autonomously, perform tasks, and collaborate with other digital agents.
Attack surface expansion
Organizations are increasingly deploying LLM-powered AI agents into customer service, development and operational workflows. Unlike traditional software, these agents can autonomously infer actions, execute commands, and initiate actions across the enterprise network.
The report points out that when these agents interact with business systems, they establish a complex, transitive chain of access to sensitive resources. This allows existing cybersecurity measures to track and secure business processes. According to Radware, these routes represent “complex routes to sensitive enterprise resources that are difficult to track or protect existing defenses.”
New protocols and exposures
Adopting protocols such as MCP and A2A improves AI agents interoperability and scalability across various business processes, but this also introduces new risks. The report highlights threats such as rapid injection, tool addiction, lateral compromise and malicious handshakes, and utilizes these emerging protocols.
In particular, rapid injection attacks have been identified as an increased risk. By embedding secret instructions in content such as emails and web pages, attackers can manipulate AI agents to remove data and initiate malicious actions. Research shows that “enemies can embed hidden instructions in emails, web pages, or documents. When AI agents process that content, users can unconsciously remove data or trigger fraudulent actions without clicking the link or approving the request.”
Low barriers to cybercrime
The report observes the emergence of a new “dark AI ecosystem,” which reduces technical barriers to cybercrime. Black hat platforms such as Xanthoroxai provide access to offensive AI tools that automate previous manual attacks, such as malware creation and phishing campaigns. Available on a subscription basis, these tools make it easier for less experienced attackers to develop and deploy exploits.
Radware's analysis also shows that AI will accelerate the weaponization of new vulnerabilities. The report demonstrates that GPT-4 can develop work exploits for disclosed vulnerabilities faster and faster than experienced human researchers, demonstrating that IT teams can patch vulnerable systems before attackers attack.
Changes in the digital landscape
The emergence of the so-called “agent Internet” is likened to previous digital shifts such as the rise of the Internet of Things. This new ecosystem has increasingly interconnected autonomous digital actors with memory, inference and action capabilities, resulting in increased operational efficiency, but increased risk exposure.
Radware's report argues that organizations need to adjust their security models to explain the new role that enterprise AI agents play. These systems act as decision makers, intermediaries and operational partners, increasing the need for effective governance and security monitoring.
“We're not in the future of AI – we already live in it,” he said. [Insert Radware Spokesperson]. “The agent ecosystem is expanding rapidly across the industry, but without strong security and surveillance, these systems risk becoming a cybercrime conduit.
Security Recommendations
This report provides a set of enterprise recommendations to protect against the unique risks posed by autonomous AI agents. These include:
- Treat LLMS and AI agents as privileged actors, subject to strict governance and access control.
- Integrate red teams and rapid assessment exercises into the software development lifecycle.
- We evaluate protocols such as MCP and A2A as security critical interfaces, rather than just productivity tools.
- Monitor the dark AI ecosystem to keep you aware of how enemies are adapting and exploiting new tools.
- Investment in detection, sandboxing and behavioral monitoring technologies tailored to autonomous AI systems.
- We recognize that AI-driven defensive capabilities play an increasingly important role in combating AI-driven threats.
This report concludes by noting that AI agents are a significant technological shift for businesses. These systems have the potential for efficiency and economic growth, but as the perimeter of useful tools and security threats continues to blur, they also introduce risks that businesses must deal with urgently.
