BEIJING, May 14 (Xinhua) — China is stepping up efforts to regulate and secure artificial intelligence (AI) agents in response to the increasing number of vulnerabilities associated with emerging open source technologies.
On May 8, the Cyberspace Administration of China (CAC), the National Development and Reform Commission, and the Ministry of Industry and Information Technology (MIIT) jointly issued the Guidelines for the Standardized Application and Innovative Development of AI Agents, emphasizing safety and controllability, as well as standardization and orderliness, as guiding principles for the development of AI agents.
In April, five central ministries, including the CAC, rolled out regulations on AI anthropomorphic interactive services, established a risk-based monitoring mechanism that requires providers to submit security assessments and algorithm filings, and proposed the creation of an AI sandbox security service platform. The move marks the country’s first articulation of the AI sandbox governance concept.
Meanwhile, MIIT and other authorities have issued guidelines to standardize technology ethics reviews, calling for AI models to remain robust, controllable, transparent and accountable. The authorities are also accelerating the development of a national AI security standards system to set clear ground rules for the healthy growth of the industry.
According to the China Information Security National Vulnerability Database (CNNVD), 111 vulnerabilities related to OpenClaw were recorded from April 14 to April 28 alone. These flaws range from improper access control to more severe code-level defects.
Previously, the China National Computer Network Emergency Response Technology Team/Coordination Center (CNCERT/CC) and MIIT had issued a series of high-level warnings about OpenClaw-related vulnerabilities. In addition, the National Computer Virus Emergency Response Center has detected numerous counterfeit OpenClaw skill packages embedded with Trojan horse malware, posing serious risks to users’ data security and system stability.
The security challenges posed by AI agents are increasingly recognized as a global concern. The Open Worldwide Application Security Project (OWASP) Foundation cited agent goal hijacking and tool abuse as key threats in a recent report.
“OpenClaw-type agents are likely to become the next generation of operating systems,” said Tian Suning, co-founder of AsiaInfo, a leading Chinese cybersecurity technology company. He noted that as companies’ core assets shift from traditional people and software to data and agents, the ownership and security of these digital entities have become key issues.
Chinese high-tech companies are rapidly developing a variety of defense systems to mitigate these risks. Liu Longwei, chief security officer of Tuya Smart, a leading provider of AI cloud platform services, said the company is augmenting its entire staff with a “digital workforce” built on a modified version of OpenClaw, noting that 70 percent of the company’s code was generated by AI last year. He acknowledged, however, that this shift brings additional security pressures, which the company has addressed by building six layers of defense, covering areas such as system hardening and supply chain security.
“Allowing employees to run unregulated OpenClaw instances in the workplace is dangerous because it undermines security controls and leaves data-leak threats unchecked,” said Liang Hongwei, a senior technology expert at Alibaba Cloud. He recommended flexible cloud deployment and strict adherence to security- and compliance-first operating principles to prevent data breaches.
Domestic security vendors are also leveraging their technical expertise to strengthen protection for AI agents. AsiaInfo’s cybersecurity division has introduced the Agent Trust Framework (ATF), a governance model that integrates the concepts of “agent intent alignment” and “human-AI co-governance.” The approach aims to contain the risks arising from the inherent randomness of AI and to ensure that AI productivity is unlocked within compliance boundaries. ■
