In response to growing cybersecurity threats and cyberattacks, the Chinese government has launched a months-long enforcement campaign aimed at curbing the misuse of artificial intelligence (AI) across the country.
“Chinese cyberspace administrators will launch a four-month campaign against ‘fraudulent behavior in AI applications,’” according to a statement released Thursday.
The campaign is divided into two phases and will target weak security screening of AI models, “AI data poisoning,” failure to register AI models and inappropriate labeling of AI-generated content, according to the statement.
It will also target the misuse of AI-generated content, including false information, “violent and indecent” material, impersonation, and content that harms minors. A particular focus is preventing abuses that spread rapidly through generative tools, such as AI-generated deepfakes and online fraud. Regulators are also expected to increase oversight of platforms and companies that develop or deploy AI systems.
Authorities will remove illegal and harmful content and punish online accounts and platforms that do not comply.
Recently, China announced it would suspend use of the AI application Manus, which is built on Anthropic’s models, amid growing concerns over cybersecurity.
The move also comes as banks across Asia, particularly China’s largest banks, step up checks on artificial intelligence tools, amid growing concerns that the latest models could help hackers find weaknesses faster and launch broader cyberattacks.
The campaign is part of a broader push by authorities to tighten oversight of rapidly advancing AI technologies and curb harmful or illegal applications.