Increasing Concern over Very Dangerous Use of AI and Related Technologies for Weapons and Wars

On April 29, Senator Bernie Sanders warned the USA and China about the possibility of cataclysmic impacts of AI technology, saying that continuing stiff competition between the two great powers in this rapidly developing technology could lead humanity to ‘lose control’. Instead, he urged the two countries toward a path of cooperation on this important issue to avoid grave dangers to humanity.

Speaking at the same event in Washington, David Krueger of the University of Montreal said that the rapid development of AI has been like summoning an alien species, one that is much smarter than us and can do things we have not even been able to conceive of, thereby eroding the relevance of human beings.

           It is some relief to know that some experts having influence in the US government also realize the dangerous implications of indiscriminate development of AI. Senators Josh Hawley and Marsha Blackburn have tried to introduce legislation that restricts AI to sectors like medicine and banking. Steve Bannon has referred to AI as “one of the most dangerous technologies in the history of mankind” and strongly opposed its indiscriminate use.

The highly tragic death of about 160 schoolgirls in an AI-targeted attack (in which about 95 other persons were also injured) on the Sharjareh Tayyabbeh (translated as The Sacred Tree) school in Minab city, Iran, on February 28, 2026 has led many more people to come forward to oppose the use of AI for weapons and wars. These included a number of UN experts who came together to strongly condemn this attack “on children and on education”.

Earlier tragic experiences in Gaza had confirmed the fears of those scientists who have been warning about the use of AI and robotics in weapon systems and warfare. The use of AI in targeting systems was supposed to facilitate more precise targeting and thereby spare innocent civilians: the Lavender system was used to target personnel and the Gospel system to target buildings where militants were supposed to be located. However, this AI technology, used hastily, actually resulted in much indiscriminate killing of innocent civilians.

           These are not the only recent examples of high risk use of AI weapons. Highly scary but fact-based warnings by well-recognized experts have been followed by increasing investments by big powers to strengthen their preparations for developing a wide range of AI/robot weapons.

While some civilian applications of robots have also faced increasing criticism over fears of the large-scale unemployment they are likely to cause in several lines of work, in a world already suffering the adverse impacts of jobless growth, the adverse impacts of the military use of robots are likely to be even more dangerous. Yet one argument given for not checking the military use of robot weapons (also called lethal autonomous weapons, or LAWs) is that civilian and military work on robots, particularly in the context of scientific research and innovation, can be closely related. The message is that as civilian research on robots advances, there will be accompanying implications for the military use of robots which no leading military power can ignore.

Hence it is stated, on the one hand, that civilian advances in robotics, by cutting costs and offering other narrow advantages regardless of social costs, will inevitably lead to the spread of robotics in civilian applications, and, on the other hand, that the military possibilities arising from this technological development will just as inevitably be used by military establishments in various parts of the world. Of course, military establishments are also investing heavily in specifically military development of robots. In the USA, several new start-ups are appearing to take forward the Pentagon’s increasing willingness to invest in AI weapons, and other powers are unlikely to lag behind.

Much earlier, in 2012-13, as part of the efforts of the International Committee for Robot Arms Control, as many as 270 computing experts, AI experts and engineers called for a ban on the development and deployment of weapon systems that make the decision to apply violent force autonomously, without any human control. They said clearly that the decision about the application of violent force should not be delegated to machines. These experts questioned how devices controlled by complex algorithms will interact, warning that such interactions could create unstable and unpredictable behavior that can initiate or escalate conflicts or cause unjustifiable and serious harm to civilian populations.

In August 2017, as many as 116 specialists from 26 countries, including some of the world’s leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots. They wrote, “Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

“We do not have long to act,” the letter warned. “Once this Pandora’s box is opened, it will be hard to close.”

          Ryan Gariepy, the founder of Clearpath Robotics, has said, “Unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapon systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”

The Economist (January 27, 2017) noted in its special report titled ‘The Future of War’, “At least the world knows what it is like to live in the shadow of nuclear weapons. There are much bigger question marks over how the rapid advances in artificial intelligence (AI) and deep learning will affect the way wars are fought, and perhaps even the way people think of war. The big concern is that these technologies may create autonomous weapon systems that can make choices about killing humans independently of those who created or deployed them.”

This special report distinguished between three types of AI or robot weapons: (i) in the loop (with a human constantly monitoring the operation and remaining in charge of critical decisions), (ii) on the loop (with a human supervising the machine and able to intervene at any stage of the mission), and (iii) out of the loop (with the machine carrying out the mission without any human intervention once launched).

          Fully autonomous robot weapons (third category) are obviously the most dangerous.

A letter warning against the coming race in these weapons was signed in 2015 by over 1,000 AI experts. An international campaign called the ‘Campaign to Stop Killer Robots’ began working on a regular basis for this and related objectives. Elon Musk stated that competition for AI superiority at the national level could be the “most likely cause of World War 3.”

          Stephen Hawking, Elon Musk and many other experts said in a joint statement that, handled badly, AI as weapon could be an existential threat to the human race.

          Paul Scharre, an expert on autonomous weapons, wrote that “collectively, swarms of robotic systems have the potential for even more dramatic, disruptive change to military operations.” One possibility he mentioned is that tiny 3D-printed drones can be formed into smart clouds that can permeate a building or be air-dropped over a wide area to look for hidden enemy forces.

          In my novel A Day in 2071 I visualize such a situation in which powerful elites use such a force of very tiny robot soldiers to suppress a revolt of common people.

          Several countries are surging ahead with rapid advances in robot weapons. In 2014 the Pentagon announced its ‘Third Offset Strategy’ with its special emphasis on robotics, autonomous systems and ‘big data’. This is supposed to help the USA to maintain its military superiority. In July 2017 China presented its “Next-Generation Artificial-Intelligence Development Plan”, which gives a crucial role to AI as the transformative technology in civil as well as military areas, with emphasis on ‘military-civil fusion’.

The Campaign to Stop Killer Robots wants a legally binding international treaty banning LAWs. But there are questions about whether this can be effective without the big military powers signing it, and these powers are going ahead with big investments in robot weapons. Certainly, whatever efforts are being made at present to check robot weapons should continue and be strengthened, but beyond this it is also important to take a very serious look at why our world, as it is organized at present, is increasingly found to be incapable of checking some of the most dangerous threats.

While very serious life-threatening conditions have existed at the planetary level for several decades due to the accumulation of nuclear weapons, a number of emerging technologies are aggravating this danger in several serious and complex ways. The Arms Control Association and author Michael T. Klare have made a very important contribution to the understanding of this grave danger in the form of their very timely report titled ‘Assessing the Dangers: Emerging Military Technologies and Nuclear (In)Stability’.

This report says, “Increasingly in recent years, the major powers have sought to exploit advanced technologies— artificial intelligence (AI), autonomy, cyber, and hypersonic, among others—for military purposes, with potentially far-ranging, dangerous consequences. Similar to what occurred when chemical and nuclear technologies were first applied to warfare, many analysts believe that the military utilization of AI and other such “emerging technologies” will revolutionize warfare, making obsolete the weapons and the strategies of the past. In accordance with this outlook, the U.S. Department of Defense is allocating ever increasing sums to research on these technologies and their application to military use, as are the militaries of the other major powers. But even as the U.S. military and those of other countries accelerate the exploitation of new technologies for military use, many analysts have cautioned against proceeding with such haste until more is known about the inadvertent and hazardous consequences of doing so. Analysts worry, for example, that AI-enabled systems may fail in unpredictable ways, causing unintended human slaughter or uncontrolled escalation.”

More specifically this report warns, “Of particular concern to arms control analysts is the potential impact of emerging technologies on “strategic stability,” or a condition in which nuclear armed states eschew the first use of nuclear weapons in a crisis. The introduction of weapons employing AI and other emerging technologies could endanger strategic stability by blurring the distinction between conventional and nuclear attack, leading to the premature use of nuclear weapons.”

On the positive side, this report informs us that arms control advocates and citizen activists in many countries have sought to slow the weaponization of AI and other emerging technologies or to impose limits of various sorts on their battlefield employment. To give an example, state parties to the Convention on Certain Conventional Weapons (CCW) have considered proposals to ban the development and the deployment of lethal autonomous weapons systems—or “killer robots,” as they are termed by critics.

Providing more details of these trends, this report tells us that among the most prominent applications of emerging technologies to military use is the widespread introduction of autonomous weapons systems— devices that combine AI software with combat platforms of various sorts (ships, tanks, planes, and so on) to identify, track, and attack enemy targets on their own.

At present, each branch of the U.S. military, and the forces of the other major powers, are developing— and in some cases fielding—several families of autonomous combat systems, including unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), unmanned surface vessels (USVs), and unmanned undersea vessels (UUVs). Russian and Chinese forces are also developing and deploying unmanned systems with similar characteristics.

Coming to the problems created by this, the report says, “The development and the deployment of lethal autonomous weapons systems like these raise significant moral and legal challenges. To begin with, such devices are being empowered to employ lethal force against enemy targets, including human beings, without significant human oversight—moves that run counter to the widely-shared moral and religious principle that only humans can take the life of another human. Critics also contend that the weapons will never be able to abide by the laws of war and international humanitarian law, as spelled out in The Hague Conventions and the Geneva Convention. These statutes require that warring parties distinguish between combatants and non-combatants when conducting military operations and employ only as much force as required to achieve a specific military objective.”

In recognition of these dangers, a concerted effort has been undertaken under the aegis of the CCW to adopt an additional protocol prohibiting the deployment of lethal autonomous weapons systems.

Regarding hypersonic weapons, this report tells us that they are usually defined as missiles that can travel at more than five times the speed of sound (Mach 5) and fly at lower altitudes than intercontinental ballistic missiles (ICBMs), which also fly at hypersonic speeds. At present, the United States, China, Russia, and several other countries are engaged in the development and fielding of two types of hypersonic weapons (both of which may carry either nuclear or conventional warheads): hypersonic glide vehicles (HGVs), unpowered projectiles that “glide” along the Earth’s outer atmosphere after being released from a booster rocket; and hypersonic cruise missiles (HCMs), which are powered by high-speed air-breathing engines called “scramjets.” All three major powers have explored similar types of hypersonic missiles.

Regarding the dangers related to this, the report tells us, “Analysts worry, for example, that the use of hypersonic weapons early in a conventional engagement to subdue an adversary’s critical assets could be interpreted as the prelude to a nuclear first-strike, and so prompt the target state to launch its own nuclear munitions if unsure of its attacker’s intentions.” (Very recent reports in April 2026 have also mentioned the possibility of the USA preparing to deploy the Dark Eagle hypersonic missile in the Iran war, even though it is not entirely ready; one reason given is that deployment may help increase budget availability for this hypersonic missile, whose unit cost is reported to be around $41 million.)

Coming to cyber-attack related threats, this report tells us these range from cyber-espionage, or the theft of military secrets and technological data, to offensive actions intended to disable an enemy’s command, control, and communications (C3) systems, thereby degrading its ability to wage war successfully. Such operations might also be aimed at an adversary’s nuclear C3 (NC3) systems; in such a scenario, one side or the other—fearing that a nuclear exchange is imminent—could attempt to minimize its exposure to attack by disabling its adversary’s NC3 systems.

Analysts warn, this report says, that any cyber-attack on an adversary’s NC3 systems in the midst of a major crisis or conventional conflict could prove highly destabilizing. “Upon detecting interference in its critical command systems, the target state might well conclude that an adversary had launched a pre-emptive nuclear strike against it, and so might launch its own nuclear weapons rather than risk their loss to the other side.” The widespread integration of conventional with nuclear C3 compounds these dangers.

This report also tells us that the increased automation of battlefield decision making, especially given the likely integration of nuclear and conventional C3 systems, gives rise to numerous concerns. Many of these technologies are still in their infancy and prone to often unanticipated malfunctions. 

This important report concludes, “The drive to exploit emerging technologies for military use has accelerated at a much faster pace than efforts to assess the dangers they pose and to establish limits on their use. It is essential, then, to slow the pace of weaponizing these technologies, to carefully weigh the risks in doing so, and to adopt meaningful restraints on their military use.”

The following proposed action steps, derived from the toolbox developed by arms control advocates over many years of practice and experimentation, are suggested in this report to reduce risks:

• Awareness-Building: Efforts to educate policymakers and the general public about the risks posed by the unregulated military use of emerging technologies.

• Track 2 and Track 1.5 Diplomacy: Discussions among scientists, engineers, and arms control experts from the major powers to identify the risks posed by emerging technologies and possible strategies for their control. “Track 2 diplomacy” of this sort can be expanded at some point to include governmental experts (“Track 1.5 diplomacy”).

• Unilateral and Joint Initiatives: Steps taken by the major powers on their own or among groups of like-minded states to reduce the risks associated with emerging technologies in the absence of formal arms control agreements to this end.

• Strategic Stability Talks: Discussions among senior officials of China, Russia, and the United States on the risks to strategic stability posed by the weaponization of certain emerging technologies and on joint measures to diminish these risks. These can be accompanied by confidence-building measures (CBMs), intended to build trust in implementing and verifying formal agreements in this area.

• Bilateral and Multilateral Arrangements: Once the leaders of the major powers come to appreciate the escalatory risks posed by the weaponization of emerging technologies, it may be possible for them to reach accord on bilateral and multilateral arrangements intended to minimize these risks.

One hopes that the warnings and recommendations presented in this report receive wide attention from peace activists as well as policymakers. As very high risks are increasing at a very fast pace, these issues must engage the growing attention of top leadership, the United Nations, the scientific community and the peace movement.


Bharat Dogra is Honorary Convener, Campaign to Save Earth Now. His recent books include Planet in Peril, Protecting Earth for Children, A Day in 2071, Earth without Borders and Man over Machine.


