Human-machine teaming strengthens coalition defenses



The 2025 series of Decision Advantage Sprints for Human-Machine Teaming (DASH) marked significant progress in integrating artificial intelligence and machine learning into battle management operations. Through a series of groundbreaking experiments, including the recently completed DASH 3, the U.S. Air Force and its coalition partners Canada and the United Kingdom tested and refined AI's potential to enhance decision-making, improve operational efficiency, and strengthen interoperability in the face of growing global security challenges.

DASH 3, held at the Shadow Operations Center-Nellis near Las Vegas, set the stage for this collaboration, led by the Advanced Battle Management System cross-functional team. The experiment, conducted in partnership with the Air Force Research Laboratory's 711th Human Performance Wing, U.S. Space Command, and the 805th Combat Training Squadron, also known as ShOC-N, further strengthened efforts to improve battle management capabilities for the future.

Integrating AI into combat decision-making

In the third installment of the DASH series, seven teams, six from industry and one from the ShOC-N innovation team, partnered with operators from the United States, Canada, and the United Kingdom to test a variety of decision-advantage tools aimed at rapidly and effectively generating multi-path combat courses of action (COAs). A combat COA maps out actions that align with commander's intent while overcoming the complexities of modern warfare, such as the fog of war and friction. Examples of combat COAs include recommended solutions for long-range kill chains, electromagnetic battle management problems, space and cyber challenges, or agile combat employment tasks such as aircraft re-basing.

U.S. Air Force Col. John Orlando, the ABMS cross-functional team leader overseeing capability development, explained the importance of flexibility in COA generation: “For example, a bomber may be able to attack from multiple axes of entry, each with unique risks and requiring different supporting assets such as cyber, ISR [intelligence, surveillance, and reconnaissance], refueling, and air-defense suppression. Machines can reason across multiple paths, supporting assets, timing, and complex uncertainties. The machine provides a rich solution space in which many COAs are considered but only some are executed, ensuring that options remain open as the situation evolves.”

This ability to explore multiple COAs simultaneously enables rapid adaptation to unexpected challenges and provides operators with diverse strategies as the situation evolves. The integration of AI into this process aims not only to speed up the decision-making cycle, but also to increase the quality of the solutions produced.
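The idea of a solution space where many COAs are held open while only some are executed can be sketched in a few lines. The code below is a purely hypothetical toy, with invented route names, assets, and scores; it does not reflect any actual DASH 3 tooling, only the general pattern of generating, ranking, and reserving candidate options.

```python
from dataclasses import dataclass

# Toy illustration of a COA "solution space": several candidates are
# generated with differing ingress routes and supporting assets, one is
# selected for execution, and the rest are retained as the situation
# evolves. All names and numbers below are hypothetical.

@dataclass
class COA:
    ingress_route: str
    support_assets: tuple    # e.g. cyber, ISR, refueling, SEAD
    risk: float              # 0 (low) to 1 (high), notional score
    time_minutes: int        # notional time to execute

def rank_coas(candidates):
    """Order candidate COAs by notional risk, then by execution time."""
    return sorted(candidates, key=lambda c: (c.risk, c.time_minutes))

candidates = [
    COA("northern axis", ("SEAD", "refueling"), risk=0.4, time_minutes=90),
    COA("southern axis", ("cyber", "ISR"), risk=0.2, time_minutes=120),
    COA("western axis", ("ISR", "refueling", "SEAD"), risk=0.3, time_minutes=75),
]

ranked = rank_coas(candidates)
chosen, alternates = ranked[0], ranked[1:]   # execute one, keep options open
print("Execute:", chosen.ingress_route)
print("Held in reserve:", [c.ingress_route for c in alternates])
```

Keeping the ranked alternates rather than discarding them mirrors the article's point: unexecuted COAs remain available if the situation changes.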

Advantages of speeding up decision-making with AI

The speed with which AI systems can generate actionable recommendations proved to be a game-changer in the decision-making process. Producing actionable options in seconds, rather than the minutes a manually built COA once took, is a fundamental advantage in combat scenarios. Initial results from the DASH 3 experiment demonstrate the power of AI to enable faster, more efficient decision-making.

“We demonstrated that our AI system can generate a multi-domain COA that takes into account risk, fuel, time constraints, force packaging, and geospatial routing in less than a minute,” Orlando said. “Recommendations generated by these machines are up to 90% faster than traditional methods, and the best machine-generated solutions demonstrated 97% feasibility and tactical validity.”

For comparison, human performance in generating a course of action typically takes about 19 minutes, with only 48% of the options considered viable and tactically effective. “This dramatic time reduction and improvement in solution quality highlights the potential of AI to significantly increase the speed and accuracy of the decision-making process, while still allowing humans to make the final decisions on the battlefield,” Orlando added.
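A rough back-of-envelope calculation shows how the reported speed and quality figures compound. The numbers below are taken from the article; the throughput metric itself (viable COAs per hour) is an illustrative assumption of my own, not an official DASH 3 measure.

```python
# Back-of-envelope comparison using the figures quoted in the article:
# humans averaged ~19 minutes per COA with 48% viability, while the
# machine produced a COA in under a minute with 97% viability. The
# "viable COAs per hour" metric is illustrative, not an official one.

human_minutes_per_coa = 19      # reported average human COA time
human_viability = 0.48          # 48% of human options deemed viable

machine_minutes_per_coa = 1     # "less than a minute" (upper bound)
machine_viability = 0.97        # 97% feasibility and tactical validity

def viable_per_hour(minutes_per_coa, viability):
    """Viable COAs produced per hour at a given rate and hit-rate."""
    return (60 / minutes_per_coa) * viability

human_rate = viable_per_hour(human_minutes_per_coa, human_viability)
machine_rate = viable_per_hour(machine_minutes_per_coa, machine_viability)

print(f"Human:   ~{human_rate:.1f} viable COAs/hour")
print(f"Machine: ~{machine_rate:.1f} viable COAs/hour")
print(f"Ratio:   ~{machine_rate / human_rate:.0f}x")
```

Under these assumptions the gap in usable output is far larger than the raw speed difference alone, because the quality improvement multiplies the rate improvement.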

The ability to quickly generate multiple actionable COAs not only increases the speed of decision-making but also gives commanders more options to work with in compressed time frames, making AI an essential tool for maintaining strategic advantage in fast-paced combat situations.

Building trust in AI: From skepticism to confidence

At the beginning of the DASH 3 experiment, skepticism about integrating AI into operational decision-making was common. However, participating operators noted a marked shift in perspective as DASH progressed. U.S. Air Force Lt. Ashley Nguyen, a DASH 3 participant from the 964th Airborne Air Control Squadron, initially expressed doubts about the role AI could play in such a complex process. “Given how difficult and nuanced it is to build a combat COA, I was skeptical that technology could be incorporated into decision-making,” Nguyen said. “But once we used the tool, we realized how user-friendly and time-saving it was. Rather than replacing us, AI gave us a solid starting point from which to build.”

As the experiment progressed, confidence in the AI grew steadily. As operators gained hands-on experience, they began to see value in AI's ability to generate workable solutions at unprecedented speed. “Some of the outputs generated by the AI were about 80% solutions,” Nguyen said. “They weren't perfect, but they were a good foundation. This increased my confidence in the system. AI became a tool to help generate starting points for decision-making.”

Trust and cooperation beyond borders

Collaboration between the United States and its coalition partners was highlighted throughout the 2025 DASH series. The participation of UK and Canadian operators provided valuable perspectives and ensured that the decision-support tools tested could meet a wide range of operational requirements.

“We understand that we cannot win the next conflict alone, without the help of our machine teammates and the support of our allies,” said Royal Canadian Air Force Col. Dennis Williams, an RCAF DASH 3 participant. “DASH 3 demonstrated the value of these partnerships as we worked together in a coalition-led simulated combat scenario. The tools we tested are essential to maintaining decision advantage, and we look forward to expanding this collaboration at future DASH events.”

This integration of human-machine teaming and participation in coalitions has highlighted the potential to improve multinational interoperability in the command-and-control battlespace. “The involvement of our coalition partners was critical not only to the success of DASH 3, but also to strengthening the alliance that supports global security,” said U.S. Air Force Lt. Col. Sean Finney, commander of the 805th Combat Training Squadron/ShOC-N.

Meeting the challenge: Weather and AI hallucinations

The DASH 3 experiment was not just a test of new AI tools but a continuation of a concerted effort to tackle persistent challenges, such as the integration of weather data and the potential for AI “hallucinations.” These issues have been a focus area throughout the DASH series, with each iteration bringing new insights and improvements to ensure the operational effectiveness of AI systems.

Weather is an important factor in real-world operations, but it was not fully integrated into the DASH series due to simulation limitations. Instead, weather effects were simulated manually by human operators through “white carding,” a method of injecting scenario-based weather events, such as airfield closures and delays, into the experiment.

“We didn't overlook the role of weather,” Orlando explained. “Although it was not the main focus of this experiment, we fully understand its operational implications and are committed to integrating weather data into future decision-making models.”

The risk of AI hallucination, in which an AI produces inaccurate or irrelevant output, especially when using large language models, was another challenge addressed during the DASH 3 experiment. Aware of this potential issue, the development teams took proactive steps to design AI tools that minimized the risk of hallucinations, and organizers diligently monitored outputs throughout the experiment.

“Our team did not observe any hallucinations, which confirmed the effectiveness of the AI systems employed during the experiment,” Orlando said. “While this is a positive outcome, we remain vigilant about the potential risks, especially when utilizing LLMs that may not be trained on military-specific terminology and acronyms. We are actively refining our systems to mitigate these risks and ensure the reliability and relevance of our AI output.”

Looking ahead: Building trust in AI for future operations

As the U.S. Air Force moves forward with the 2026 series of DASH experiments, lessons learned from the 2025 iteration will provide an important foundation for future efforts. Growing trust in human-machine collaboration, strengthening international partnerships, and continued improvements in AI tools all point to a future where AI plays a key role in operational decision-making.

“The 2025 DASH series establishes a strong foundation for future experiments and has the potential to further expand the role of AI in battle management,” Orlando said. “By continuing to build trusted relationships with operators, improve AI systems, and foster international cooperation, the United States and its allies are taking important steps to ensure we are prepared to meet the evolving challenges of modern warfare.”

“This is just the beginning,” Williams said. “The more we can incorporate AI into our decision-making processes, the more time we have to focus on the human aspects of war. These tools are key to staying ahead of our adversaries and maintaining peace and stability on a global scale.”


