Chinese Data Law Expert Says AIGC Tool Control Impossible So Far

(Yicai Global) April 12 — China’s internet regulator is seeking to control the content generated by artificial intelligence models, but according to Chinese data law experts, that is not yet possible because the process by which such content is generated and delivered is intelligent and random.

Because language models concern the interests of individuals and the general public, establishing controls before AIGC content reaches users is a daunting task and is the focus of China’s latest regulation, Gao Fuping, director of the Data Law Research Center at East China University of Political Science and Law, told Yicai Global.

The Cyberspace Administration of China yesterday released draft measures for the management of AIGC services to regulate the security and ethical issues that can arise from products such as ChatGPT. The watchdog is accepting public comments until May 10.

Wu Shenkuo, a professor of law at Beijing Normal University, told Yicai Global that the draft was issued quickly to keep pace with rapid technological development and application trends, and that it demonstrates mature, agile, and efficient oversight of China’s digital environment.

The measures set out requirements for AIGC service providers covering approvals, algorithm design, training data selection, generated content, real-name user registration, personal privacy, and trade secrets.

China will continue to support independent innovation and international cooperation on AI algorithms and basic framework technologies, and will encourage enterprises to prioritize the use of safe and reliable software, tools, computing power, and data resources.

Wu noted that the draft reflects three areas of focus for overseeing AIGC in China. The first is to make use of the systems and mechanisms stipulated by existing laws and regulations, and the second is to emphasize the fundamentals of risk prevention, response, and management, Wu said.

According to the draft, AIGC providers will have to submit their products and services to China’s internet watchdog for a security assessment, and register, modify, or cancel the filings for their algorithms, before making them available to the public.

The draft states that content produced by AIGC must be truthful and accurate, and that steps must be taken to prevent the generation of false information. Institutions or individuals that use AIGC to provide text, image, or speech generation services are responsible for the content their models produce.

Not all countries have embraced AI chatbots. Italy’s data protection agency announced a temporary ban on US startup OpenAI’s ChatGPT on March 31, making Italy the first country to restrict the service. Other European countries have since begun weighing regulatory action, and the US government is also considering whether ChatGPT and other AI tools should be investigated.

Editor: Martin Kadiev
