September 15th, 2025
Beijing – Risks dominate the current debate on AI governance. In July this year, Nobel laureate and Turing Award recipient Geoffrey Hinton attended the World Artificial Intelligence Conference in Shanghai. His speech struck a theme he has repeated almost without exception since leaving Google in 2023: he warned once again that AI could soon surpass humanity and threaten our survival.
Scientists and policymakers from China, the US and Europe nodded in agreement. Yet this apparent consensus conceals a deep paradox in AI governance. At meeting after meeting, the brightest minds in the world identify common risks, call for cooperation and sign declarations, only to watch the world return to fierce competition the moment the panel ends.
This paradox has troubled me for years. I trust the science, but if the threat is truly existential, why can't even survival unite humanity? Only recently have I come to grasp a possibility that at first seems absurd: these risk warnings fail to promote international cooperation because defining AI risks has itself become a new arena of international competition.
Traditionally, technology governance follows a clear causal chain: identify specific risks, then develop governance solutions. Nuclear weapons pose objective dangers: explosive yield, radiation, radioactive fallout. Climate change offers measurable indicators and an increasingly robust scientific consensus. AI, by contrast, is a blank canvas. No one can say whether the greatest risk is mass unemployment, algorithmic discrimination, a loss of human control, or something else entirely that we have not yet imagined.
This uncertainty transforms AI risk assessment from scientific inquiry into strategic gamesmanship. The US emphasizes "existential risks" from "frontier models," a term that spotlights Silicon Valley's most advanced systems.
This framing casts the American tech giants as both the source of danger and the indispensable partners in controlling it. Europe focuses on "ethics" and "trustworthy AI," extending its regulatory expertise from data protection into artificial intelligence. China argues that "AI safety is a global public good," and that risk governance should not be monopolized by a few countries but should serve the common interests of humanity.
Corporate actors have proven equally adept at shaping the narrative of risk. OpenAI's emphasis on "alignment with human goals" highlights both a genuine technical challenge and the company's particular research strengths. Anthropic promotes "Constitutional AI," staking out a domain in which it claims specialist knowledge. Other companies carefully select safety benchmarks that favor their own approaches, implying that competitors who fail to meet these criteria pose the real risk. Computer scientists, philosophers, economists and other experts likewise project their own values through the narratives they tell: warnings of technical catastrophe, of moral hazard, or of sudden upheaval in the labour market.
The causal chain of AI safety is thus inverted. We first construct a risk narrative and only then speculate about the technical threats; we first design a governance framework and only then define the problems that require governing. Defining the problem becomes the cause rather than the effect. This is not an epistemological failure but a new form of power: the ability to package one's own risk definitions as unquestionable "scientific consensus." How "artificial general intelligence" is defined, what constitutes an "unacceptable risk," what qualifies as "responsible AI": the answers to these questions directly shape future technological trajectories, industrial competitive advantages, international market structures, and even the world order itself.
Does this mean that cooperation on AI safety is destined to remain empty talk? Quite the opposite. Understanding the rules of the game allows one to play it better.
AI risks are constructed. For policymakers, this means advancing their own agendas in international negotiations while understanding the genuine concerns and legitimate interests behind others' positions.
Acknowledging that risks are constructed does not mean denying their reality: however risk is defined, solid technical research, robust emergency mechanisms and practical safety measures remain essential. For businesses, it means helping to shape technical standards while considering multiple stakeholders rather than pursuing a winner-takes-all outcome.
True competitive advantage comes from distinctive strengths rooted in the local innovation ecosystem, not from opportunistic positioning. For the public at large, it means developing a kind of "risk immunity": learning to identify the interests and power relationships behind different AI risk narratives.
International cooperation remains essential, but its nature and potential must be rethought. Rather than pursuing a unified framework for AI risk governance, a consensus that is neither achievable nor necessary, we should acknowledge and manage the coexistence of multiple risk perceptions. The international community needs a "laboratory of competitive governance" in which different governance models prove their value in practice, rather than one comprehensive global compact that supersedes all others. Such polycentric governance may look loose, but through mutual learning, checks and balances it can achieve a higher order of coordination.
We habitually view AI as just another technology requiring governance, without realizing that it is changing the meaning of "governance" itself. The competition to define AI risk is not a failure of global governance but a necessary evolution of it: a collective learning process for confronting the uncertainty of transformative technology.
The author is an associate professor at the Center for International Security and Strategy, Tsinghua University.
The opinions expressed do not necessarily reflect those of China Daily.
