Iran and the growing danger of AI in war

In Iran, AI has entered the battlefield. The US military is using the technology to enhance decision-making, sift through vast quantities of data to identify targets, and streamline military logistics. Inevitably, such conflicts become testing grounds for cutting-edge technology, which only sharpens the urgent need for effective governance and clear boundaries on how and when AI is used in weapons systems.

One risk lies in poorly managed data, the lifeblood of any AI system: a model is only as good as the information used to train it. There is as yet no evidence that AI was at fault in the recent devastating missile strike on a girls’ school in southern Iran, but the investigation is expected to examine how the data used to select targets is verified.

Another risk is that people tasked with making life-or-death decisions on the recommendations of AI systems may find the machines hard to second-guess. Some experts warn this may already be happening in the Iran conflict, since it is difficult for humans to grasp all the factors an AI model weighs in reaching its recommendations.

There is also growing pressure to take humans out of the loop entirely, for example where enemy jamming of communications links makes direct human control of weapons impossible. The United States is already developing fully autonomous drones to help protect Taiwan from a potential Chinese attack.

This lends new urgency to calls for legislation limiting the use of lethal autonomous weapons systems. For now, a moratorium on their use seems appropriate; eventually a complete ban may be necessary.

One reason is reliability, or the lack of it. Today’s AI rests on probabilistic systems prone to so-called hallucinations, which are not dependable enough to be trusted with decisions over life and death. That is one reason Anthropic opposed handing full control of its AI to the Department of Defense for classified use, and why the US government moved earlier this month to order the company’s technology out of military supply chains.

There are also moral arguments against autonomous weapons, chief among them that decisions to take lives should not be subcontracted to machines. Yet even under full autonomy, someone somewhere will still have chosen to deploy the weapon and may retain ultimate responsibility for its use. A moral case could also be made in favor of autonomous weapons if they could be shown to reduce the human error that causes civilian casualties.

The strongest argument for limiting AI’s spread on the battlefield is that, like any transformative technology, it has the potential to change the very nature of war. It is impossible to predict exactly how that will happen, but some of the possibilities that have been suggested are truly frightening, such as swarms of low-cost drones used to track and kill large numbers of enemies.

A good start would be for the world’s AI superpowers, the United States and China, to agree on practical limits on the use of autonomous weapons and ways to prevent the proliferation of this technology.

Another would be for the United Nations group that has spent years discussing a possible international ban on autonomous weapons, akin to the bans on biological and chemical weapons, to reach its conclusion. The need for action is increasingly urgent: AI’s inroads into warfare will only accelerate with each new international conflict.


