How the U.S. military is preparing for enemy AI cyberspace attacks



The attack came faster than any human enemy could manage.

Communications and data networks critical to U.S. Army operations in the Asia-Pacific region have been probed by a new type of enemy seeking to confuse and trap soldiers.

That’s the scenario Army leaders, guided by America’s top AI companies, confronted in a new series of tabletop exercises designed to prepare for an era of AI-enhanced cyber operations and to work out how to defend against them.

It’s the latest example of how the Army is embracing artificial intelligence at every level of combat, and the latest recognition that the challenges of future warfare may be too fast for humans to tackle alone.

The Army and various partners held their second artificial intelligence tabletop exercise earlier this week, the first being last September. The first iteration brought together about 15 CEOs from leading AI companies to propose solutions to real-world problems, such as using AI capabilities in conflict environments where communications and networks are denied by the enemy, expediting supply chain management, and managing behind-the-scenes paperwork so civilians and personnel can focus on other tasks.

The exercise focused specifically on the Army’s cyber defenses, using AI to prepare for an Indo-Pacific crisis in a September 2027 scenario. Rather than using AI to launch a one-time, decisive cyberattack, the simulated adversary leveraged AI to continually adapt to the Army’s defensive posture and repeat its volleys faster than any human defender could keep up with.

Army leaders, including Army Secretary Dan Driscoll, have previously noted the growing importance of defending military networks, data, and software against enemy attack, arguing that it is just as important as defending physical assets and terrain.



Fourteen companies participated in the exercise, including Google, OpenAI, and Microsoft.

Photo by U.S. Army Cpl. Gisele Gonzalez



This time, 14 companies gathered at the table, including executives from Google, OpenAI, Microsoft, Amazon Web Services, Palo Alto Networks, and more. Officials from the Army and the U.S. Department of Defense were also in attendance. Lt. Gen. Chris Eubank, commander of the Army’s Cyber Command, said that in addressing this scenario, “the focus was really on how we can more effectively defend through artificial intelligence and frontier models” and the use of AI agents.

A variety of ideas and solutions emerged, but a recurring one focused on a combination of AI agents’ abilities in deception tactics, such as using AI to detect adversaries within U.S. systems, learning from adversary behavior, and forcing them to spend time and resources on obstacles. The exercise also highlighted what Army leaders described as previously unknown vulnerabilities in Army systems.

The simulated enemy’s AI system analyzed the Army’s defenses in real time, observing what triggered human intervention, slowing the response, and learning from each iteration. The lesson: in a potential conflict, adversaries could use artificial intelligence to attack cyber defenses in waves while continuously adapting to U.S. countermeasures.

The tabletop also raised the issue of risk tolerance in the use of AI. “At what point are we allowing the machine [AI] agents to accept risk, rather than humans accepting the risk?” Eubank said. The discussion also turned to where AI is best used in cyber defense, and whether AI agents could perform certain functions on their own.

U.S. military officials and experts are questioning the broader role of AI and whether it can or should operate independently in certain capacities, amid concerns that the speed of decision-making in future wars against AI-powered adversaries will be too fast for humans. Army leadership is now encouraging the use of artificial intelligence for a variety of tasks, from paperwork to coding, though all of it still requires human involvement.

Following the tabletop exercise, the service expects to take a closer look at the role of AI in cybersecurity and how much leeway it should be given.

“If we believe that using AI to augment humans is the end state, we will be far behind,” Eubank said. “We have to get to a point where we’re not just augmenting humans. Where can AI do things autonomously in the cyberspace defense environment?”