Artificial intelligence is rapidly changing the way wars are fought, outstripping the legal framework intended to govern it.
In Ukraine, drones with autonomous navigation capabilities have increased the success rate of target strikes from roughly 10-20% to 70-80%. Israel has reportedly introduced AI into the target identification process for air strikes against Hamas in Gaza. And in recent operations against Iran, the United States has relied on AI-enhanced intelligence, surveillance, and reconnaissance for battle management and targeting support.
These systems can compress decision timelines, expand situational awareness, and improve the speed and accuracy of military operations. But as they proliferate, so do concerns. Policymakers are grappling not only with how these tools work, but also with whether existing frameworks under international law are sufficient to govern their use.
“While states acknowledge that these rules do not specifically refer to AI, they are general legal obligations that states must follow,” said Netta Goussac, senior researcher at the Stockholm International Peace Research Institute.
But the challenge isn’t just the lack of AI-specific rules, Goussac said, but the lack of transparency. There is limited visibility into when and how militaries deploy AI, what safeguards are in place, and what the humanitarian impact will be. “This kind of opacity erodes trust.”
Most militaries continue to emphasize a human-in-the-loop model, maintaining formal human control over the final decision to use force. In reality, however, the line between human judgment and machine execution is blurring as autonomous capabilities are increasingly integrated into weapons systems.
UN calls for ban on ‘killer robots’
Jerry Simpson, deputy director at Human Rights Watch, said that current international law does not prevent machines from making the final determination to kill. However, efforts are underway, particularly at the United Nations, to set clearer limits.
For almost 10 years, the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE) has focused on the risks posed by these weapons. But progress toward common legal guardrails remains slow, and developments on the battlefield have already outpaced the discussions.
“Especially since the Ukraine war, we have seen that AI is not just part of weapons systems, but also part of broader military decision-making,” said Ingvild Bode, director of the Center for War Studies at the University of Southern Denmark.
Bode said that since the U.N. group began meeting in 2017, it has struggled to translate its discussions into written commitments to regulate lethal autonomous weapons systems, popularly known as “killer robots.”
Although over 60 countries have signed the Declaration on the Responsible Military Use of AI, major powers like the United States and China are hesitant to enter into legally binding agreements, wary of constraining their strategic advantage in the development of AI-powered autonomous weapons and defense systems.
An important decision point is approaching. In November, the UN group must decide whether to continue talks in their current format, move discussions to the UN General Assembly, or proceed with formal treaty negotiations led by specific countries.
The challenge, Simpson said, is that the GGE operates by consensus, meaning that meaningful results require unanimous agreement. This increases the likelihood that negotiations will move elsewhere.
He said the choice was between remaining in a “bureaucratic and heavy process” that could take years to bring major military powers on board, or pursuing a faster, more flexible path, even if it meant excluding those powers initially.
Lessons from Anthropic vs. the Pentagon
In late February, California-based Anthropic refused to grant the U.S. Department of Defense unlimited access to its AI model Claude for military applications. US President Donald Trump publicly criticized the decision, and Defense Secretary Pete Hegseth called the company a “supply chain risk.” Anthropic responded by filing a lawsuit to prevent the Pentagon from placing it on a national security blacklist.
Since then, the company has doubled down on its cautious approach, restricting access to its latest model, Mythos, citing concerns that it could enable abuse, particularly cyberattacks.
This episode reveals a broader structural gap. As private companies play a growing role in supplying AI to militaries, decisions about safeguards are largely made by the companies themselves, and oversight remains fragmented and uneven.
Bode said tech companies should be more concerned with ensuring they sell reliable and compliant products, and that Anthropic’s lawsuit is “in some ways a good business case.” “Once you sell, you lose control. But you can still be embroiled in significant disputes.”
In Europe, Arthur Mensch, CEO of Mistral AI, which is currently aiming to expand its defense contracts, recently argued in Brussels that the continent should ensure sovereign control over military AI.
“If these artificial intelligence systems are indeed being procured from foreign companies, our military could be crippled,” Mensch said.
However, he argued that customers are ultimately responsible for putting safeguards in place for how these systems are used.
Analysts see a broader pattern of burden shifting. Companies want clearer rules from states, which bear the legal obligations. But industry players also have significant knowledge about how these systems work and cannot be treated as neutral vendors.
“The responsible development and lawful use of these technologies depends on collaboration,” Goussac said. “This is a joint exercise between the state and industry, and neither can do it alone.”
