Governing the invisible: AI, dual-use biology, and the illusion of control

The most important security threats of the 21st century no longer announce themselves through troop movements or missile tests. They arrive quietly, in the form of algorithms, datasets, and source code that promise progress but carry the risk of harm. Nowhere are these risks sharper than at the intersection of artificial intelligence and the life sciences. Hailed as a breakthrough for medicine and global health, this convergence has also exposed a core failure of international governance: the world faces fast-moving biological threats, yet the institutions tasked with managing them remain slow, under-resourced, and bound to the nation-state.

At the root of this problem is the classic dual-use dilemma, now operating at unprecedented scale and speed. With the advent of AI, biology is becoming a predictive science rather than a purely experimental one. Computational analysis can now accomplish what once required years of laboratory work: learning how proteins behave, modeling the evolution of pathogens, and identifying vulnerable points in molecules. This transformation has saved lives and spurred innovation. At the same time, it has lowered the barriers that once restrained misuse. Capabilities have become cheaper, more portable, and harder to trace.

The United Nations is not blind to biological dangers. The Biological Weapons Convention (BWC) remains at the heart of global efforts to prevent biological warfare. But the BWC was designed for a world of tangible threats: laboratories, stockpiles, and national programs. It prohibits outcomes; it does not govern enablers. Algorithms do not violate treaties. Open-source models are not subject to inspection. Proprietary research platforms fall outside any conventional verification logic. As a result, AI-enabled biological capabilities sit in a regulatory shadow: technically legal, strategically significant, and institutionally ungoverned.

This is not merely a legal gap; it is a conceptual one. The BWC assumes that danger begins when intent turns malicious. AI undermines this assumption by making capability itself a strategic variable. As the means to design or optimize biological systems become more accessible, intent becomes harder to identify and easier to act on. Deterrence, which has always rested on attribution and retaliation, begins to erode.

Other United Nations bodies have moved to fill the vacuum, albeit in limited ways. UNESCO's ethical guidelines for AI focus on transparency, human oversight, and fairness. These matter for social trust, but they are not instruments of biosecurity; they treat AI as a societal risk rather than a strategic one. The World Health Organization, meanwhile, focuses on surveillance, preparedness, and response. These are essential functions, but they are reactive by nature: they anticipate harm and mitigate its impact after it emerges, rather than asking how emerging technologies are transforming the risk landscape upstream.

A vivid example of slow governance can be seen in the international community's response to advances in computational biology. When an AI system succeeded in accurately predicting protein structures, it was rightly hailed as a transformative achievement. The models and tools were released openly and integrated into research infrastructures around the world almost overnight. That openness was treated as self-evident progress. The question rarely asked was whether openness without guardrails amounts to exposure. No UN mechanism required a shared assessment of whether such capabilities would materially alter the biological threat environment. The moment was met with celebration rather than vigilance, because no agency was tasked with being vigilant.

This episode points to a deeper structural problem. Global governance is organized by sector, but today's risks emerge at intersections. Arms control bodies regulate weapons. AI regulators pursue discrimination and liability. Health institutions watch for outbreaks. None of them addresses dual-use AI in biology as a whole. The result is fragmentation: many actors working on some variant of the problem, but no single forum empowered to address it in its entirety.

Yet within this fragmentation lies an opportunity.

The first opportunity lies in bringing risk-based governance into the international arena. Rather than applying the same scrutiny to all AI or all biological research, a risk-based approach asks how specific capabilities alter threat dynamics. Done well, it would allow the international community to distinguish routine biomedical applications of AI from high-impact tools that meaningfully lower the barriers to biological misuse. Such differentiation is consistent with the UN principle of proportionality and avoids a false dilemma between innovation and security.

A second opportunity is norm formation, an area where the influence of the United Nations is often underestimated. Norms against chemical and biological weapons defined what was unacceptable, and thereby shaped behavior, long before verification regimes existed. Similar norms could emerge for the responsible use of AI in the life sciences: discouraging the publication of step-by-step enabling details, instituting review of potentially high-risk models, and embedding biosecurity in research culture. The question is not one of legitimacy but of urgency.

A third untapped possibility is reframing the problem as one of collective resilience rather than control. Situating AI governance within global health security, alongside pandemic preparedness, early warning, and capacity building, would shift the discourse from restriction toward collective protection. For many nations, especially in the Global South, that framing will ring far truer than abstract arguments about technological constraint.

But the most significant gap is institutional. No UN agency holds a clear mandate to govern the intersection of AI, biology, and security comprehensively. Accountability is unclear, coordination is ad hoc, and responsibility is dispersed. The danger is not necessarily an imminent disaster, but that avoidable risks go unaddressed and become ever costlier to fix as AI capabilities continue to evolve.

AI-enabled biological risk is unlikely to announce itself dramatically. It builds beneath the radar, as defenses erode and capabilities diffuse. Once a threshold is crossed, restoring control is far harder than preserving it would have been. The international community can still adapt its structures, but the window is closing.

The main question is no longer whether AI needs to be governed; on that there is little dispute. The question is whether global institutions can evolve fast enough to govern what AI makes possible, rather than what past generations feared. In a world where intelligence itself is a strategic asset, it is impossible to govern what cannot be seen.
