This defense company created an AI agent that blows things up.

Like many companies in Silicon Valley today, Scout AI trains AI models and agents at scale to automate chores. The big difference is that instead of writing code, replying to emails, or buying things online, Scout AI’s agents are designed to seek out and destroy things in the physical world with exploding drones.

In a recent demonstration at an undisclosed military base in central California, Scout AI’s technology put a self-driving off-road vehicle and a pair of armed drones under AI control. The systems located trucks hidden in the area and destroyed them with explosives.

“We need to bring next-generation AI to the military,” Colby Adcock, CEO of Scout AI, said in a recent interview. (Adcock’s brother, Brett Adcock, is CEO of Figure AI, a startup that develops humanoid robots.) “We’re taking a hyperscaler foundation model and training it from being a generalized chatbot or agent assistant to a combatant.”

Adcock’s company is part of a new generation of startups racing to adapt technology from large AI labs to the battlefield. Many policymakers believe that leveraging AI will be key to future military superiority. The combat power of AI is one of the reasons the U.S. government seeks to restrict sales of advanced AI chips and chip manufacturing equipment to China, but the Trump administration recently opted to ease those restrictions.

“It’s good to see defense technology startups pushing the envelope with AI integration,” said Michael Horowitz, a professor at the University of Pennsylvania who previously served as Deputy Assistant Secretary of Defense for Force Development and Emerging Capabilities. “If the United States is going to lead the military deployment of AI, that’s exactly what they need to do.”

However, Horowitz also notes that harnessing the latest advances in AI for military use can prove particularly difficult.

Large language models are inherently unpredictable, and AI agents, such as the popular AI assistant OpenClaw, can misbehave even when given relatively benign tasks, such as ordering goods online. Horowitz said it may be especially difficult to prove that such systems are robust from a cybersecurity perspective, a prerequisite for widespread military use.

Scout AI’s recent demo involved several steps in which the AI was given full control of the combat system.

At the start of the mission, the following command was entered into Scout AI’s system, known as Fury Orchestrator:

“Fury Orchestrator, send one ground vehicle to checkpoint Alpha. Perform a kinetic strike mission with two drones. Destroy the blue truck 500 meters east of the airfield and send confirmation.”

A relatively large AI model, with over 100 billion parameters, interprets the initial command; it can run on a secure cloud platform or on an on-site air-gapped computer. Scout AI uses an unnamed open-source model with its restrictions removed. The model acts as an agent, issuing commands to a smaller 10-billion-parameter model running on the ground vehicle and drones participating in the exercise. The smaller model acts as an agent in its own right, issuing its own commands to the lower-level AI systems that control each vehicle’s movements.
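The chain of command described above, a large orchestrator model delegating to smaller onboard models that in turn drive low-level controllers, can be sketched roughly as follows. This is a minimal illustration only: every class, method, and routing rule here is hypothetical, and the language-model planning steps are replaced by fixed stand-in logic rather than anything resembling Scout AI’s actual software.

```python
# Hypothetical sketch of a hierarchical agent chain: an "orchestrator"
# (standing in for the 100B+ parameter model) turns a natural-language
# order into per-vehicle tasks; each vehicle's agent (standing in for the
# 10B onboard model) expands a task into primitive actions; a low-level
# controller records/executes those actions.
from dataclasses import dataclass, field

@dataclass
class LowLevelController:
    """Stands in for the control software that actually moves a vehicle."""
    log: list = field(default_factory=list)

    def execute(self, action: str) -> None:
        self.log.append(action)

@dataclass
class VehicleAgent:
    """A small onboard agent: expands one task into primitive actions."""
    name: str
    controller: LowLevelController = field(default_factory=LowLevelController)

    def handle_task(self, task: str) -> None:
        # A real agent would plan with an onboard model; here each task
        # simply maps to a fixed plan-then-execute sequence.
        for action in (f"plan route for: {task}", f"execute: {task}"):
            self.controller.execute(action)

class Orchestrator:
    """Stands in for the large model that interprets the operator's order."""
    def __init__(self, vehicles: dict[str, VehicleAgent]):
        self.vehicles = vehicles

    def dispatch(self, command: str) -> list[str]:
        # Stand-in for LLM parsing: split the order into clauses and route
        # each clause to a vehicle with a naive keyword rule.
        assignments = []
        for clause in command.split(". "):
            clause = clause.strip().rstrip(".")
            if not clause:
                continue
            target = "ugv" if "ground vehicle" in clause else "drone-1"
            self.vehicles[target].handle_task(clause)
            assignments.append(f"{target}: {clause}")
        return assignments

fleet = {"ugv": VehicleAgent("ugv"), "drone-1": VehicleAgent("drone-1")}
orchestrator = Orchestrator(fleet)
tasks = orchestrator.dispatch(
    "Send one ground vehicle to checkpoint Alpha. "
    "Perform a strike mission with two drones."
)
```

The point of the structure is the delegation: the orchestrator never touches a controller directly, so each layer can run on different hardware (cloud or air-gapped server, vehicle compute, flight controller).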

Seconds after receiving the order to move out, the ground vehicle sped along a dirt road winding through brush and trees. After a few minutes, the vehicle stopped and launched two drones, which flew to the area where the target was reported to be. After spotting the truck, an AI agent running on one of the drones steered toward it and gave the order to detonate its explosive just before impact.
