U.S. military relies on AI in attack on Iran, but this technology does not reduce the need for human judgment in war



Thanks to the use of artificial intelligence, the U.S. military was able to “strike a staggering 1,000 targets in the first 24 hours of the attack on Iran,” according to the Washington Post. The military used Anthropic’s AI tool, Claude, in conjunction with Palantir’s Maven Smart System for real-time targeting and target prioritization in support of combat operations in Iran and Venezuela.

Although Claude itself is only a few years old, the U.S. military’s ability to use Claude and other AI systems did not emerge overnight. The effective use of automated systems depends on extensive infrastructure and skilled personnel. The United States can use AI in war today because of decades of investment and experience.

My experience as an international relations scholar studying strategic technology at Georgia Tech, and previously as a U.S. naval intelligence officer, has taught me that a digital system is only as good as the organization that uses it. Some organizations squander the potential of advanced technology, while others find ways to compensate for a technology’s weaknesses.

Myths and realities of military AI

Science fiction stories about military AI are often misleading. Popular ideas about swarms of killer robots and drones tend to exaggerate the autonomy of AI systems and underestimate the role of humans. Success or failure in war usually depends not on the machine but on the people using it.

In the real world, military AI refers to a vast collection of different systems and tasks. The two main categories are autonomous weapons and decision support systems. Autonomous weapons systems can select and engage targets on their own. These weapons are often the subject of science fiction and are the focus of considerable debate.

In contrast, decision support systems are now central to most modern militaries. These are software applications that provide intelligence and planning information to human personnel. Many military applications of AI, including in the current and recent wars in the Middle East, involve decision support systems rather than weapons. Modern warfighting organizations rely on countless digital applications for intelligence analysis, campaign planning, battle management, communications, logistics, administration, and cybersecurity.

Claude is an example of a decision support system, not a weapon. It is built into the Maven Smart System, which is widely used by military, intelligence, and law enforcement agencies. Maven uses AI algorithms to identify potential targets from satellite imagery and other intelligence data, and Claude helps military planners sort through that information to select and prioritize targets.

Israel’s Lavender and Gospel systems, used in the Gaza war, are also decision support systems. These AI applications provide analysis and planning support, but humans ultimately make the decisions.

Researcher Craig Jones explains how the U.S. military is using artificial intelligence in its attacks on Iran, and some of the issues that arise from its use.

Long history of military AI

Weapons with some degree of autonomy have been used in warfare for more than a century. Nineteenth-century naval mines exploded on contact. German buzz bombs during World War II were guided by gyroscopes. Homing torpedoes and heat-seeking missiles alter their trajectory to intercept maneuvering targets. Many air defense systems, such as Israel’s Iron Dome and the US Patriot system, have long offered fully automatic modes.

Robotic drones have become ubiquitous in 21st-century warfare. Today, unmanned systems perform a variety of “dull, dirty, and dangerous” tasks on land, at sea, in the air, and in orbit. Remotely piloted vehicles such as the U.S. MQ-9 Reaper and Israel’s Hermes 900 can loiter autonomously for long periods, providing platforms for reconnaissance and attack. Combatants in the Russia-Ukraine war pioneered the use of first-person-view drones as kamikaze weapons. Some drones rely on AI to acquire targets when electronic jamming makes remote control by a human operator impossible.

But systems that automate reconnaissance and attack are only the most visible part of the automation revolution. The ability to see further and attack faster greatly increases the information processing burden on military organizations. This is where decision support systems come into play. If autonomous weapons strengthen the eyes and arms of the military, decision support systems strengthen the brain.

Cold War era command and control systems anticipated modern decision support systems like Israel’s AI-enabled Tzayad for battle management. Automation research projects such as the Semi-Automatic Ground Environment (SAGE) in the United States in the 1950s led to important innovations in computer memory and interfaces. During the U.S. war in Vietnam, Igloo White collected intelligence data into a central computer to coordinate U.S. air strikes against North Vietnamese supply lines. The Defense Advanced Research Projects Agency’s Strategic Computing Program in the 1980s spurred advances in semiconductors and expert systems. In fact, defense funding originally enabled the rise of AI.

Organizations enable automated warfare

Automated weapons and decision support systems rely on complementary organizational innovations. From the electronic battlefield in Vietnam to the AirLand Battle Doctrine of the late Cold War and the subsequent concept of network-centric warfare, the U.S. military has developed new ideas and organizational concepts.

Of particular note is the emergence of a new style of special operations in the United States’ global war on terror. AI-enabled decision support systems have become invaluable for finding terrorist operatives, planning attacks to kill or capture them, and analyzing information collected in the process. Systems like Maven have become essential to this style of counterterrorism.

The impressive American methods of war seen in Venezuela and Iran are the result of decades of trial and error. The U.S. military has honed a complex process for gathering intelligence from many sources, analyzing target systems, evaluating attack options, coordinating joint operations, and assessing bomb damage. The only reason AI can be used throughout the targeting cycle is because there are countless humans everywhere working to keep it running.

The use of AI in military targeting raises important concerns about automation bias, the tendency for people to place undue trust in automated decision-making. But these concerns are not new. Igloo White was often misled by Vietnamese decoys. In 1988, a state-of-the-art American Aegis cruiser accidentally shot down an Iranian airliner. In 1999, an intelligence error led a U.S. stealth bomber to accidentally bomb the Chinese embassy in Belgrade, Serbia.

Analytical errors and cultural biases within the U.S. military have led to the deaths of many Iraqi and Afghan civilians. Most recently, evidence suggests that a Tomahawk cruise missile struck a girls’ school next to an Iranian naval base, killing about 175 people, mostly students. This targeting may have resulted from a failure of the U.S. intelligence community.

Automatic predictions require human judgment

The successes and failures of decision support systems in war are due more to organizational factors than to technology. While AI can help organizations become more efficient, it can also amplify organizational bias. While it may be tempting to blame Lavender for the excess civilian deaths in the Gaza Strip, Israel’s lax rules of engagement are likely more important than automation bias.

As the name suggests, decision support systems support human decision making. AI will not replace humans. Human personnel still play a critical role in designing, managing, interpreting, validating, evaluating, remediating, and securing systems and data flows. The commander is still in command.

From an economic perspective, AI improves forecasting, that is, generating new information from existing data. However, forecasting is only one part of decision making. People ultimately make the key decisions about what to predict and how to use the predictions. Humans have preferences, values, and judgments about real-world outcomes that AI systems inherently lack.
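This division of labor, prediction from the machine and values from the human, can be sketched abstractly. The toy code below is my illustration of the economic argument, not a depiction of any real military or commercial system; the function names and numbers are invented for the example. The same machine prediction leads to different decisions depending on the values humans assign to outcomes.

```python
# Toy illustration: the machine supplies a probability; humans supply the
# values that turn that probability into a decision.

def predict_probability(evidence: float) -> float:
    """Stand-in for a machine-learned model: maps evidence to a probability."""
    return max(0.0, min(1.0, evidence))

def human_decision(p: float, value_of_acting: float, cost_of_error: float) -> str:
    """Human judgment: weigh the prediction against human-chosen values."""
    expected_utility = p * value_of_acting - (1 - p) * cost_of_error
    return "act" if expected_utility > 0 else "hold"

# The prediction is identical in both cases; only the human values differ.
p = predict_probability(0.8)
print(human_decision(p, value_of_acting=10, cost_of_error=5))    # -> act
print(human_decision(p, value_of_acting=10, cost_of_error=100))  # -> hold
```

The point of the sketch is that the model never chooses: changing `cost_of_error` flips the decision without changing the prediction at all, which is why better forecasting makes the human choice of values more consequential, not less.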

In my view, this means that the increasing military use of AI is actually making humans more important in warfare, not less.


