States have a duty to conduct legal reviews of new weapons, means, and methods of warfare. The legal review of artificial intelligence (AI) systems poses significant legal and practical challenges due to their technical and operational characteristics. In this post, we consider how insights gained from the legal review of cyber weapons can inform the review of AI systems and AI-enabled weapons.
AI and cyber tools are similar and closely related. Both operate in the digital realm and are characterized as “war algorithms” when used for military purposes. Furthermore, AI can be used to control and deploy cyber weapons, while cyber weapons can be used to manipulate and counter AI systems.
This post discusses this relationship from a legal review perspective. It first examines how legal standards developed for the cyber realm can help determine which AI-enabled tools should be scrutinized, and how the temporal considerations that inform the legal review of evolving cyber weapons indicate when the review of learning AI tools should begin.
Additionally, this post examines how the substantive rules of international law relevant to the review of cyber weapons, such as targeting rules and the prohibition of indiscriminate weapons, provide guidance in assessing the legality of AI systems. Finally, from a practical angle, we discuss how assessment frameworks and toolkits in the cyber domain can support and inform review practices for AI-enabled systems.
Legal basis and scope
International law applies to both cyberspace and the development, deployment, and use of military applications of AI. Under treaty law, Article 36 of Additional Protocol (AP) I to the Geneva Conventions requires States to determine whether the employment of a new weapon, means, or method of warfare would violate international law.
As AP I has not been universally ratified, whether the obligation to conduct legal reviews amounts to customary international law or rests on other international law grounds remains a matter of debate. Although scholars disagree as to whether the rule is embodied in customary law, there is evidence that a more limited obligation exists under customary international law.
States that are not parties to AP I may also conduct legal reviews as a matter of domestic policy (see U.S. Department of Defense (DoD) Directive 2311.01). Such reviews can help anticipate risks and identify shortcomings. Overall, contemporary State practice shows a positive trend towards conducting legal reviews of cyber weapons. Rule 110 of the Tallinn Manual 2.0 likewise addresses States’ obligation to ensure that the cyber weapons they acquire or use comply with the law of armed conflict.
In the cyber field, experts have proposed so-called software reviews and operational legal reviews to address the lack of clarity regarding the definition of cyber weapons, the threshold for triggering armed conflict, and the fact that Article 36 of AP I binds only States parties. This kind of approach can be extended to reviews of AI systems, which face similar challenges.
Independent of these debates, if an AI application is classified as a weapon, means of warfare, or method of warfare, current approaches to cyber weapons may help determine whether the system is subject to review. Cyber weapons that have the potential to cause harm or destruction fall within the scope of weapons reviews, whereas cyber tools intended for use in situations below the threshold of armed conflict do not. If software that was not originally developed for military purposes is acquired for use in a conflict, it must also undergo legal review.
Temporal considerations
Each State will decide the appropriate time to begin the review process, but reviews should begin as early as possible.
Similar to cyber weapons, if an AI system requiring review is developed domestically, it should be examined at the conception, research, design, development, and testing stages. If the system is acquired, adopted, or procured externally, the review should take place when the procurement offer is considered. Even if the software has already undergone a legal review by the providing State, this does not relieve the acquiring State of its own obligation.
Once trained and deployed, AI systems can adapt and evolve. The reality of cyber weapons and “cyber speed” already requires dynamic adaptation. Cyber tools are typically designed and tailored to specific operations and targets, and may require frequent modification. Given the continuously changing cyber environment, iterative reviews may be necessary even during active hostilities. The Tallinn Manual 2.0 indicates that a “major change” triggers a new legal review, whereas a “minor change” with no operational impact does not. Drawing this line remains difficult in practice, but the standard can also be applied to AI systems.
The timing of the review may affect the designation of the competent authority. Conventional weapons are typically reviewed by dedicated bodies within ministries of defense. Practice regarding cyber weapons tends to be less formal, and review by military lawyers advising commanders on specific operations may suffice. Germany, for example, provides for the legal review of cyber measures in parallel with operational planning, integrating such reviews with precautionary obligations. This is a useful model for AI systems.
Legal considerations
The legality of a weapon is independent of its novelty or its common use by States (see the DoD Law of War Manual). What matters is whether its use would violate international law in some or all circumstances. Although States need not foresee every possible misuse of a weapon, including cyber weapons, they should pay particular attention to AI systems with learning capabilities, as the outcome of the learning process can be unpredictable. Currently, neither cyber weapons nor AI systems are prohibited by treaty or customary law. Their legality is determined by the applicable rules of international law. In other words, legal reviews broadly address compliance with international law.
From the perspective of the law of armed conflict, a legal review must first assess whether a cyber or AI-enabled weapon is indiscriminate by nature, that is, whether it cannot be directed at a specific military objective or whether its effects cannot be limited. In addition, States must comply with targeting rules, in particular those on distinction, proportionality, and feasible precautions. Traditionally applied by commanders and operators during specific operations, such rules will need to be integrated into legal reviews when systems perform targeting assessments autonomously based on AI.
Cyber weapons use cases can help in evaluating AI use cases. In the context of potentially AI-guided cyber weapons, it is noteworthy that cyber tools designed to target website users indiscriminately, regardless of their status as combatants or civilians, are considered indiscriminate. Such tools would also be prohibited if they have the potential to cause widespread, long-term, and severe damage to the natural environment. Furthermore, cyber tools that cause harm when activated by an apparently harmless act may qualify as “booby-traps” and would therefore be subject to the corresponding restrictive legal framework. Similar considerations apply to cyber and AI tools designed to modify or control restricted or prohibited weapons.
There are also jus ad bellum considerations. Although Articles 2(4) and 51 of the UN Charter do not refer to specific weapons, the rules on self-defense, necessity, and proportionality apply to the use of cyber weapons and autonomous capabilities embedded in AI decision-support systems. Controversies and gray areas around what constitutes an “attack,” especially in the digital realm, can make relevant reviews complex or inconclusive. Compliance with human rights obligations could further guide the legal evaluation of AI systems, but no clear practice has emerged so far in the cyber realm.
Practical considerations
Legal reviews involve legal, military, and technical aspects. Testing and empirical evidence can contribute to the legal evaluation. This could include the use of military “cyber ranges” or similar AI laboratories to support training and education and to promote respect for targeting law and responsible behavior. However, building simulations that faithfully reproduce reality remains particularly complex in both the cyber and AI domains.
In the cyber field, structured review frameworks that include uniform methods for assessing the specific functionalities and operational capabilities of software have been proposed to promote clarity and objectivity regarding the capabilities of cyber weapons. These cover design features as well as technical and performance characteristics.
Similarly, modern toolkits provide guidance to practitioners through systematic access to information. These may include summaries of contemporary cyber incidents to capture lessons learned, and hypothetical deployment scenarios that highlight key legal touchpoints, such as whether the use of a tool or system constitutes an “attack” and requires review. Additionally, mapping current state practices can inform policymakers about successful approaches to legal review (see, for example, the Cyber Law Toolkit).
Going forward, national efforts to improve the legal review of cyber weapons can already integrate important elements of the review of AI applications, while new approaches to the legal review of autonomous weapons and exchanges among States can inform policy, procedural frameworks, and decision-making on the practical aspects of legal reviews (see Asia-Pacific Institute for Law and Security (APILS), Third Expert Conference Report, APILS Legal Review Portal).
Conclusion
While AI systems and AI-enabled weapons pose new challenges for the legal review of weapons, law and practice regarding cyber weapons and tools can advance current thinking on this issue. While convergence between the cyber and AI realms may be inevitable, drawing on cyber law and practice for the legal review of AI is already instructive. This could lead to new practices, as well as greater consistency and clarity, in the legal review of war algorithms.
***
Dr. Tobias Vestner is Head of Research and Policy Advice and Head of the Security and Law Programme at the Geneva Centre for Security Policy (GCSP).
Nicolò Borgesano is an Associate Strategic Program Officer at ITU and a former Associate Project Officer at GCSP.
The views expressed are those of the authors and do not necessarily reflect the official views of the U.S. Military Academy, Department of the Army, or Department of Defense.
Articles of War is a forum for professionals to share opinions and cultivate ideas. Articles of War does not screen articles to fit a particular editorial agenda, nor endorse or advocate material that is published. Authorship does not indicate affiliation with Articles of War, the Lieber Institute, or the United States Military Academy at West Point.
Photo credit: U.S. Air Force Airman 1st Class Jared Lovett
