Study claims China’s AI governance is not just top-down

LONDON — China watchers are promoting a “stereotypical narrative” when they claim that Beijing’s management of artificial intelligence rests purely on authoritarian government control, a new study has found.

Xuechen Chen, an associate professor of politics and international relations at Northeastern University in London, co-authored a paper examining how China’s traditional values and commercial interests have shaped the self-regulatory guardrails around deployed AI.

This argument is set out in a peer-reviewed paper published in Computer Law & Security Review, “State, Society, Market: Interpreting the Norms and Dynamics of China’s AI Governance.”

In China, where dissent and anti-government views are heavily censored, the prevailing view that President Xi Jinping and the Chinese Communist Party are in control has led many Beijing observers to believe that the country’s technology oversight is top-down.

But Chen said it was a “stereotypical narrative” to claim that all safety measures are handed down by the state. She argued that such a view fails to recognize the influence that Chinese society and cultural norms exert on tech giants such as TikTok owner ByteDance and chatbot upstart DeepSeek.

“What we wanted to do was show that China’s AI governance, and digital governance more broadly, is not what people imagine it to be: a top-down, state-driven system where the national government says ‘do this’ and it simply gets done,” Chen said.

“In reality, that is not the case, because there is a wide range of different stakeholders in this whole governance process, including obviously the state but also the private sector and, more recently, I think more importantly, society.”

Chen explained that each of these elements – the state, the private sector, and society – is a stakeholder in the governance debate. “They work together and co-create these norms and regulatory mechanisms,” she added.

According to a study by Tech Buzz China and Unique Research, 23 of the world’s 100 largest AI products by annual recurring revenue are built by Chinese developers, the majority of them focused on overseas markets. The four largest Chinese players, Glority, Plaud, ByteDance, and Zuoyebang, generated a combined $447 million.

This revenue still lags far behind that of the larger U.S. companies, with developers OpenAI and Anthropic generating estimated annual recurring revenue of about $17 billion and $7 billion, respectively.

Although China has no single ratified AI law comparable to the European Union’s AI Act, its approach hews closer to that regulation-led model than to the market-driven U.S. one, Chen said.

AI governance is led by the country’s internet regulator, the Cyberspace Administration of China.

Hard-liners on China say this is part of the state’s censorship of the internet. In September, the Cyberspace Administration launched a two-month campaign threatening “severe penalties” against social media apps, including the popular microblogging site Weibo, that failed to suppress “negative” content about life in China.

Wired, a tech news outlet, reported that all AI companies in China must register with regulators and demonstrate that their products avoid risks ranging from psychological harm to “violating core socialist values.”

Chen’s paper, co-authored with Lancaster University’s Lu Xu, points out that China became the first country to introduce formal regulations specifically targeting generative AI. The subject has been debated in the West in recent weeks following the uproar over Elon Musk’s AI chatbot, Grok, creating sexual deepfakes of women and children on the social media platform X.

China’s generative AI services are legally restricted from creating content deemed illegal or indecent, in order to “reflect the tastes and broader interests of modern Chinese society,” the paper said. “China has also developed perhaps one of the most effective and rigorous systems for the protection of minors in cyberspace, encapsulating games, short videos and GAI services,” it added.

Last year, the Communist government updated its sweeping law on the protection of minors to cover online activity. The time minors can spend online is now restricted, and smartphone manufacturers are required to install kid-friendly modes.

Chen said that even before the law changed, AI developers were already moving to self-regulate their platforms, “aggressively imposing rules” to avoid conflict with regulators.

There are two reasons for this, she explained. First, companies do not want to violate the government’s strict censorship laws. DeepSeek, China’s answer to OpenAI’s ChatGPT chatbot, for example, does not respond to prompts criticizing Xi Jinping’s government.

The second reason AI companies decided to self-regulate was market-driven, Chen continued. Confucian values remain embedded in Chinese culture, and the hierarchical nature of the family is still strong. That means if parents discover their children consuming harmful or unwanted content on AI or other online platforms, they are likely to intervene.

“If ByteDance didn’t moderate content for kids, parents would be furious. And they would just say, ‘No, we’re not going to use your TikTok. It’s over,'” Chen said. “Technology companies don’t want to face a scenario like this where consumers aren’t satisfied.”

Chen acknowledged that there are broader questions about how much power non-state actors really wield in authoritarian societies like China, but said that question was a matter for further research.

“What we wanted to demonstrate in this paper is that these various actors do indeed actively participate in shaping regulations, policies, guidelines, and standards on the ground,” she added.


