
Researchers at the companies behind the AI tools ChatGPT, Claude and Gemini are calling on their employers to stop the technology from being used for mass surveillance and fully autonomous weapons. Credit: Jaque Silva/NurPhoto/Getty
The attack on Iran by the United States and Israel was a reminder of how close artificial-intelligence research now is to the front lines. Reports of AI being used to prioritize targets are not new. In intense combat, every second counts, and those deploying the technology believe it can fire faster than humans can, react more quickly to incoming fire and reduce the need for direct communication with a command centre. At last week’s Raisina Dialogue, India’s annual foreign-policy conference, Anil Chauhan, the country’s chief of defence staff, and his Philippine counterpart Romeo Brawner emphasized that AI and automated systems are transforming warfare.

How AI is shaping the Iran war and what future conflicts will look like
Many AI researchers say that even the most advanced systems, known as frontier AI models, still cannot perform reliably or operate within the existing laws of war. Employees at OpenAI and Google, two of the technology companies developing such models, have gone public with their concerns. Their warnings must be heeded. As with any technology that has the potential to kill indiscriminately, AI must not be used in warfare until specific rules governing its use are in place.
Currently, no international law specifically addresses the use of AI in warfare. But international humanitarian law is clear that weapons may not be used indiscriminately, and that combatants must identify targets and take precautions to minimize the risk of civilian casualties. These requirements apply to AI just as they do to other military technologies.
Last month, Anthropic, the San Francisco, California-based company that developed the AI model Claude, publicly clashed with the US Department of Defense (DoD). The DoD had asked for the right to use Anthropic’s technology for any lawful purpose. Anthropic refused to allow its models to be used for mass surveillance or in weapons operating without human oversight, insisting that such applications cannot yet be carried out reliably and responsibly. The DoD terminated its contract with Anthropic and formally designated the company a “supply chain risk”, a label that could exclude it from bidding on certain US government contracts. In response, Anthropic filed a lawsuit.

AI weapons: Russia’s war in Ukraine shows why the world must enact bans
In contrast, rival AI company OpenAI, also based in San Francisco, agreed to the Pentagon’s terms. More than 100 employees at OpenAI and nearly 900 at Google signed an open letter calling on the companies to reject the government’s demands (see https://notdivided.org/). On Saturday, OpenAI’s head of robotics, Caitlin Kalinowski, resigned, saying: “AI plays an important role in national security. But surveillance of Americans without judicial oversight and lethal autonomy without human permission are policies that deserve more consideration than they are getting.”
As Nature has reported, many researchers have ethical and moral objections to the use of AI in mass surveillance and autonomous weapons. Others have practical concerns about the technology’s current capabilities. AI models can produce inaccurate outputs, and how they reach their conclusions is often opaque. Incorporating AI into military technology might one day reduce civilian casualties, but the creators of some of the best AI models say that this is not yet the case.

Why we need to limit the autonomy of AI-powered weapons
Many staff members joined US AI companies on the understanding that their work would not be used by the military, or would be used only in limited ways. But in January 2024, OpenAI updated its usage policy to remove language stating that its models could not be used for “military and warfare” purposes. Last February, Google backed away from a pledge not to allow its AI to be used for surveillance or weapons. And Anthropic’s chief executive, Dario Amodei, has previously proposed using AI to gain military superiority over authoritarian states.
These developments clearly demonstrate the limits of voluntary commitments, which can be reversed at any time. A better approach is to create legally binding international agreements on what is permissible. There is precedent for scientists leading such efforts. Nuclear weapons were developed with little regulation or international agreement, yet treaties such as the Treaty on the Non-Proliferation of Nuclear Weapons and the Treaty on the Prohibition of Nuclear Weapons are partly the result of scientist-led campaigns. Under the Chemical Weapons Convention, developing or deploying such weapons is illegal, and a similar legal agreement exists to prevent the use of biological weapons.

UN creates new Scientific AI Advisory Committee: What will it do?
The Convention on Certain Conventional Weapons aims to address issues raised by emerging weapons, and its member states, including the United States, meet to discuss lethal autonomous weapons. This could ultimately lead to an international agreement specifically covering AI. For now, however, progress is hampered by a lack of support from China, Israel and the United States, and by the absence of consensus on the precise role of AI in war and on what counts as “acting autonomously”. Gathering evidence on these questions should be a task for the United Nations Independent International Scientific Panel on AI, which was appointed last month and is working on its first report.
OpenAI says that the DoD plans to convene a working group on the topic that will include military personnel, government officials and leaders of frontier AI companies. Researchers at these companies must continue to call for limits on the use of AI in warfare and use their influence to advocate for global rules. Treaty negotiations take time, and some argue that the pace of AI development means that the world cannot afford to wait for slow diplomatic solutions. That is not an argument for inaction: negotiations that never begin can never conclude.
