Symposium on Military AI and the Law of Armed Conflict: Navigating the Governance of Dual-Use Artificial Intelligence Technologies in Times of Geopolitical Rivalries


[Dr Guangyu Qiao-Franco is an Assistant Professor of International Relations at Radboud University and a Senior Researcher of the ERC funded AutoNorms Project at the University of Southern Denmark. ]

[Mahmoud Javadi serves as an AI Governance Researcher at Erasmus University Rotterdam in The Netherlands. In this capacity, he plays a role in an EU-funded research consortium titled ‘Reignite Multilateralism via Technology’ (REMIT).]

In the early 2010s, growing moral, political, and legal concerns over artificial intelligence (AI)-enabled autonomous use of force led to intergovernmental expert negotiations on arms control under the ambit of the United Nations Convention on Certain Conventional Weapons (CCW). Progress within the CCW has been sluggish, with states parties caught up in divisions over issues such as the sufficiency of the existing legal framework to regulate autonomous weapons, the permissible forms of AI use in the use of force, and measures to ensure human control. In recent years, frustration over the little progress made within the CCW against a rapidly shifting technology landscape has prompted regulatory attempts at other United Nations (UN) forums, such as the Security Council and the General Assembly, as well as in various multilateral and regional arenas and frameworks. Among the latter, the Summit on Responsible Artificial Intelligence in the Military Domain (REAIM) and the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy stand out as prominent normative frameworks concerning military AI. Despite these initiatives, however, the exact contours of military AI regulation have not been fully defined.

The path toward a comprehensive agreement on the military application of AI is destined to be fraught with challenges, owing to the dual-use nature of the technology and exacerbated by escalating geopolitical rivalries. A dangerous vicious circle is currently emerging in military AI governance: security concerns over the integration of advanced AI into military capabilities and arsenals have prompted expansive access-restriction measures that permeate the civilian domain. For example, the US has tightened export controls on semiconductors bound for China, a move supported by Japan and the Netherlands. In response, emerging powers have adopted a more rigid, if not uncooperative, stance in military AI negotiations and dialogues, fearing broader control measures imposed in the name of national security. For one, China's surprising abstention on the UN General Assembly resolution concerning the dangers of lethal autonomous weapon systems (LAWS) in November 2023 contradicts its earlier support, expressed at the CCW, for a legal ban on LAWS. This situation fosters distrust and tension between states, undermining confidence-building and coordination efforts. It also increases the likelihood of more extreme responses that could trigger direct or indirect inter-state escalation and open conflict.

In an age defined by geopolitical rivalries, the weaponization of export controls on AI goods and services may inadvertently erode the willingness of rival nations to engage in cooperative efforts on military AI governance. To steer clear of this downward trajectory, the arms control regime needs a stepwise paradigm shift concerning dual-use AI. Central to this shift are measures that prioritize inclusivity, transparency, and confidence-building. Ultimately, fostering this new paradigm for arms control can reduce the pressure on rival states to expedite civil-military technology transfer and pave the way for a 'global AI order' that stands outside geopolitical rivalries.

Intricacies and Challenges of Arms Control for Dual-Use AI

Unlike tangible and readily recognizable technologies, AI is intangible, making traditional restrictive measures less applicable and effective, if not entirely inapplicable. Three main reasons explain this challenge. First, the intangible nature of AI software enables effortless cross-border transfer, circumventing monitoring by enforcement agencies. Unlike physical goods, AI algorithms can be transmitted digitally across borders with little to no physical trace, making it difficult for authorities to track and regulate their movement effectively.

Second, the verification of AI capabilities is complex due to the extensive code involved, making it challenging for enforcement agencies to assess. Unlike conventional technologies, whose physical characteristics can be examined, AI systems often consist of intricate algorithms spanning millions of lines of code, making it daunting to verify their functionalities, especially when those functionalities can have both benign and harmful applications.

Third, beyond these monitoring and verification challenges, AI is increasingly provided as a service rather than as a standalone product, complicating export controls and oversight of its use across multiple countries. With the rise of cloud computing and Software-as-a-Service (SaaS) models, AI capabilities can be accessed remotely, blurring jurisdictional lines and making it challenging for regulators to enforce compliance with arms control and usage restrictions.

The dual-use nature of AI introduces another layer of hurdles in classification and regulation. Unlike other revolutionary technologies, whose progress relies heavily on government investment, AI technologies are propelled forward by private actors, ranging from technologists and entrepreneurs to corporations. Restrictive measures are therefore likely to pose significant risks to global commerce and to provoke dissent among private-sector actors reliant on overseas markets.

States, predominantly from the Global North, have developed national frameworks and transnational regimes, such as the Wassenaar Arrangement and the Australia Group, to name but two, to maintain control lists for dual-use items. The composition of these lists, however, remains subjective and politically driven, owing to the lack of international consensus on their definitions and constituents. Given the absence of established criteria for controlling dual-use AI within these transnational regimes, the same Global North states have begun asserting their authority by unilaterally restricting access to AI technologies, their components, and their applications. Managing AI items on such lists is highly difficult given the prevalence of general-purpose AI software. Overly expansive access controls restricting the export or use of AI technologies deemed to have dual-use applications risk stifling innovation, hampering economic growth, and unnecessarily fueling geopolitical rivalries.

The Return of Geopolitics

"The return of geopolitics" resurfaced as a ubiquitous term following Russia's annexation of Crimea in February 2014. The annexation, however, was not the sole catalyst. Numerous subsequent events and developments have underscored the ramifications of this resurgence, with 'tensions' and 'competition' as defining characteristics of 21st-century geopolitics.

The competition between the United States and the People's Republic of China, often termed in Europe an overblown 'systemic rivalry', is widely regarded as the primary order-defining issue. In 2022, EU High Representative Josep Borrell plainly encapsulated this sentiment: "The world is being structured around this competition – like it or not. The two big powers – big, big, big, very big – are competing, and this competition will restructure the world."

AI, among other cutting-edge technologies identified as national power multipliers, has become a key focus of the revived geopolitical competition. It is hard to find a leader who would disagree with Russian President Putin's 2017 assertion: "The one who becomes the leader in this [AI] sphere will be the ruler of the world."

In a world grappling with geopolitical rivalries, the prevailing perception of AI, as echoed by Putin, has left few options for established and emerging great powers alike. The former, notably the United States, strive to maintain their qualitative edge in AI, often resorting to excessive monopolization of technology and securitization of access to prevent the diffusion of technology from the civilian to the military sector. This includes measures such as export controls, foreign investment reviews, and suspensions of research and development (R&D) partnerships. Examples include US de-risking policies, NATO's Defense Innovation Accelerator for the North Atlantic (DIANA), and the EU's Action Plan on Synergies between Civil, Defence and Space Industries.

The push for access control clashes with the quest for more rapid advancement on the part of rising powers subjected to these stringent measures. China, for instance, has persistently pursued the acquisition and advancement of AI technologies, leveraging them for various domestic and international agendas. Remarks delivered by Chinese Premier Li Qiang at the 2024 World Economic Forum encapsulate these divergent positions: critiquing limitations on technology access and innovation while advocating more open, cooperative approaches to technology.

Restrictions on access to AI technologies, even those in the civilian domain, have raised geopolitical tensions that reduce states' sense of security, making the prospect of any meaningful progress in cooperative military AI governance bleak. We zoom in on the unfolding of the US's chip war against China as an example. On October 7, 2022, the Biden administration imposed sweeping controls limiting US exports of advanced AI chips and Chinese acquisitions of companies that could allow the country to build chips smaller than 14 nm, followed by an Executive Order in August 2023 establishing mechanisms to limit outbound investment in sectors such as semiconductors, quantum information, and AI in China and other designated countries of concern. These measures quickly led to Beijing's decision to impose licensing requirements on the export of gallium and germanium, along with several compounds derived from them, metals crucial to semiconductor manufacturing.

Moreover, following the restrictions imposed by Washington and some of its allies, China has shifted its military-civil fusion-driven semiconductor investment policy toward enhancing state autonomy. It has extended support to less competitive enterprises, enabled the substitution of outdated foreign chips with domestically produced ones in crucial military equipment, and allowed military-focused research to progress without fear of foreign embargoes. In a likely response to Western restrictive measures, China, the world's second-largest military spender, allocated an estimated €270 billion to its military in 2022, constituting 13 percent of the world's total spending. This marks a significant 63 percent increase compared to 2013 and a 4.2 percent uptick from 2021.

While national measures such as those adopted by Washington and Beijing – though not exclusive to these nations – aim to control access to dual-use AI, they run the risk of inadvertently reinforcing protectionism and isolationism. This could worsen already negative trajectories in global geopolitics rather than effectively manage dual-use AI regulation.

The Way Forward

In the realm of AI and beyond, there is no shortage of innovative proposals, both modest and audacious, from academia and policy circles for managing dual-use technologies. Nearly all suggestions emphasize the vital importance of international consensus and collaboration in effectively regulating these technologies. This imperative is particularly relevant in the context of AI, given its revolutionary impact across various domains of human life. Both ‘AI export controls’ and ‘AI arms control’ mechanisms are crucial to prevent the malicious proliferation of AI. However, the exploitation and weaponization of AI technologies and use cases by states against one another – be it a peer or near-peer competitor – undermines the establishment of a global AI order, which is indispensable for amplifying the advantages and mitigating the risks of AI.

It is unequivocally clear that the United States and China, arguably the two most important actors in AI governance, must work together to steer the course in introducing a new paradigm and eventually shaping such an order. Given the complex and nuanced dynamics characterizing the relationship between the two nations in the context of geopolitical competition, the journey ahead may be fraught with challenges, particularly if the White House is once again led by masterminds of unilateralism. Nevertheless, the intricate web of geopolitical, economic, and technological interdependencies underscores the importance of fostering constructive engagement and collaboration between Washington and Beijing. To safeguard against the risks of political maneuvering and the potential derailing of Sino-American endeavors, it is imperative to explore alternative avenues for dialogue and cooperation as a prerequisite to the new paradigm.

In this regard, Track II diplomacy facilitated by epistemic communities emerges as a promising approach, albeit only a first step. By leveraging the expertise and networks of non-governmental actors such as technologists, scientists, and industry leaders, Track II diplomacy can serve as a conduit for mutual understanding, trust-building, and informed discussion. Focusing specifically on the management of dual-use AI, epistemic communities can play a notable role in facilitating nuanced debates, identifying common ground, and developing pragmatic solutions that balance the imperatives of national security, innovation, and healthy competition.

The involvement of epistemic communities can evolve to lay the groundwork for an inclusive and transparent dual-use AI control framework established in a broader, multilateral setting. To be effective, this framework must be open to states of all backgrounds and inclusive of different viewpoints, going beyond like-minded groups, thereby fostering a global AI order free from weaponization and politicization. The ultimate goal is to mitigate geopolitical tensions and increase stability so as to fundamentally reduce the need to convert civilian technologies to military use.

X/Twitter: @GykQiao 


X/Twitter: @MahmoudJavadi2

