In the West, AI technologies are primarily developed by private technology companies providing services to businesses and individuals with the goal of achieving market dominance.
But that's not necessarily the case in China. “The Chinese government is really working on AI development,” Jia said.
Chinese tech giants like Baidu and Tencent are also seeking market dominance, but the Chinese government is far more involved in the actual development of AI technology than its Western counterparts.
According to Jia, the Chinese authorities' intervention at the development level is primarily driven by Beijing's desire to use such technologies for mass surveillance and to improve domestic technological expertise so as to reduce reliance on Western technologies.
“China's goals are very different [from the West's]. One is to succeed in the market; the other is to strengthen control and project strength to demonstrate superiority. Naturally, you can imagine that their concerns would be different,” Jia added.
The conflicting views and concerns of China and the West were highlighted at a symposium in Beijing in June, where Zhang Linghan, a professor at the Institute of Data Law at the China University of Political Science and Law, noted that each country has “different perspectives, history and legal frameworks.”
While countries can learn from each other's regulations, the difference between the European Union and China is that “some of the risks identified in Europe were actually acceptable and legal in China,” he added.
“This is [due to] a cultural difference and a context difference,” said Zhang, who sits on the UN's high-level advisory body on AI.
At the symposium, European diplomats provided details on the EU Artificial Intelligence Act (AI Act) adopted in May and the EU's priorities in regulating AI.
Marjut Hannonen, head of the European Union trade delegation in Beijing, said Europe's “most important” concern was to ensure that the safety and fundamental rights of European citizens were protected.
The law bans applications deemed too dangerous, such as those that manipulate people's free will or are used for social scoring. “We will not allow such applications,” Hannonen said.
Yu Chuanman, director of the Centre for Regulation and Global Governance under the Institute of International Studies at the Chinese University of Hong Kong's Shenzhen campus, highlighted the role of “cultural differences” between China and the West, particularly in terms of human rights.
Matt Sheehan, a fellow at the Carnegie Endowment for International Peace who studies global technology issues, said the Chinese government's biggest concern about AI is how it will affect online content and information.
According to him, the earliest binding regulations focused on the role of emerging technologies in content creation and distribution, including both recommendation algorithms and AI-generated content.
“China says these regulations are aimed at creating a healthy online environment, but US and EU experts would simply call it censorship,” he said.
Jia, of the University of Southern California's Marshall School of Business, said Western governments are generally concerned about issues such as privacy, transparency, bias, fairness and accountability, but noted that these concerns originate with individual users of the technology and with activists.
“In China, the government's goal is likely to be effective surveillance, and there is no activist community to hold it to account,” she said.
“Many of the issues that Western regulators are concerned about are irrelevant in China because there is no underlying force to resolve them.”
Weifeng Zhong, an affiliated research fellow at George Mason University's Mercatus Center, said the discrepancy represents a “fundamental difference in values” between China and most Western liberal democracies.
In the West, civil rights take priority, and the use of AI, and of technology more broadly, must respect them.
By contrast, in China, technology is supposed to serve the “larger collective interest,” but “what is good for society is often decided by the regime,” Zhong said.
“That's why AI-enabled surveillance in China can so easily be used for repressive purposes under the guise of promoting safety and order. This divide has been present since the dawn of the internet, but it's only getting wider in the age of AI.”
Jia said that China's and the West's competing approaches, particularly to the domestic governance of AI, reflect a “fundamental ideological gap” that is only becoming larger in a world rife with geopolitical conflicts, and is also a potential obstacle that could slow global progress toward unified AI regulation.
“In reality, it is not easy for [countries] to reach agreement on regulatory issues and the governance of AI,” she said, adding that a lack of trust between China and the West could make agreeing on global rules more difficult.
“AI is a facade for geopolitical tensions. It is not just a technology [issue]; it is deeply intertwined with politics.”
Zhong said the current debate over AI risks reflects broader differences in views on human rights and freedoms that China and the West are still debating, adding that he was not optimistic the two sides would be able to resolve their differences anytime soon.
So what would global governance of AI look like? It could follow a similar path to the internet, he suggested.
“The Chinese government has a very different view than Western countries about information and how freely it should flow within the country and across borders. As a result of that divergence, the world wide web is now quite fragmented,” he said.
“There was a period in the early days of China's economic reform when it looked like China was going to become a more open society, but that ship appears to have sailed.”
Over the past year, China has signaled its ambitions to play a greater role in setting global rules and standards on AI and has called for greater cooperation with other countries on emerging technologies.
At the China-Africa Internet Development and Cooperation Forum held in April, the two sides recognized the need to step up cooperation on AI, and called for strengthening research, development and application of the technology, as well as enhancing dialogue.
Earlier this month, China proposed its own framework, the Global AI Governance Initiative, which calls for all countries to have equal rights in AI development and for joint efforts to combat the misuse of the technology by terrorists.
The non-binding resolution was co-sponsored by more than 140 countries, including the United States.
Yu, from the Chinese University of Hong Kong's Shenzhen campus, said the resolution was an achievement and a “small step” for countries to move forward.
“This is also how global governance achieves its objectives. We start by setting aside the areas where we fight and disagree. We build on those foundations and try to find other points of agreement along the way,” he said.
However, other aspects of AI, such as military applications, may become increasingly difficult for countries to agree on, given that AI is now considered a national priority by many governments.
“This is one of the most important technologies in the geopolitical struggle of the 21st century,” Yu said.
And yet, despite the deep-rooted differences between Chinese and Western societies, there may be areas of common interest where cooperation can take place, even if only limited.
Beyond issues such as access to AI in developing countries, which were included in a recent UN resolution, it was suggested that countries could also jointly address the energy resources needed to sustain future AI innovation.
Zhong proposed cooperation to mitigate the existential risks that advanced AI could pose to humanity, adding that countering such threats should be a goal that even countries with vastly different values can support.
Sheehan, of the Carnegie Endowment for International Peace, suggested that the “only hope” for global governance of AI is to focus on a “very narrow set of problems” that countries have an interest in solving.
One example is the proliferation of powerful AI systems to non-state actors around the world, such as those used to accelerate hacking capabilities.
“Both the United States and China will use AI systems to hack each other, but neither country wants those systems to fall into the hands of terrorists or criminal organizations,” he said.
Jeffrey Ding, an assistant professor of political science at George Washington University, added that while national regulatory approaches will vary, there is still potential for proactive global governance on AI safety issues.
“Even at the height of the Cold War, the United States continued to cooperate with the Soviet Union on nuclear safety and security issues because it was in the national security interest of all nations to avoid accidental or unauthorized nuclear detonations,” he said.
“Similarly, there is ample room for international cooperation and coordination when it comes to the control of powerful AI systems.”
Jia said it would be a challenge for China to reach agreement with Western countries on some issues, including those that could endanger Beijing's grip on power, but that dialogue should still take place.
“Without conversation, there is zero chance of finding common ground, even on the smallest of things,” she warned.
“Dialogue is necessary, but the hope of a global AI [framework] that everyone can adopt is remote. The bar shouldn't be set too high.”