China’s new rules on generative AI include security reviews

After Baidu and Alibaba rushed to announce their AI products, China has moved quickly to propose regulations for its burgeoning generative AI industry.

The Cyberspace Administration of China (CAC) has drafted new rules on the development of generative AI services, under which AI products developed in China must undergo a “security assessment” before being released to the public. The goal is to ensure the “sound development and standardized application” of generative AI technology, and the proposal is open for public comment, the agency said.

In addition to not promoting terrorism, discrimination, or violence, content generated by AI bots must “reflect core socialist values and not involve subversion of state power.” The guidelines, released on April 11, said companies should ensure AI content is accurate and take steps to prevent models from generating false information.

Regarding data collected to train AI models, the data must not contain information that infringes intellectual property rights. If the data includes personal information, the company must obtain the consent of the data subject or meet other conditions required by law, the CAC wrote.

The rules come as major Chinese tech companies have rushed in recent weeks to launch generative AI products trained on large datasets to create new content. Baidu is testing its Ernie bot, AI company SenseTime released an AI bot called SenseNova this week, and e-commerce giant Alibaba introduced Tongyi Qianwen, which it plans to integrate across its products.

However, these bots are still in test mode and not yet publicly available, and the timeline for when they will be is unclear. According to analysts cited by Bloomberg, the CAC rules could affect how China’s AI models are trained in the future.

The popularity of AI bots has skyrocketed since San Francisco-based OpenAI launched ChatGPT just five months ago. AI chatbots have been used to draft emails and write essays, but there is growing concern that generative AI models can spew out false and inaccurate information.

How will AI be regulated?

Countries around the world are trying to regulate the development of AI bots. Last week, Italy temporarily banned ChatGPT, citing its processing of personal data and the bot’s tendency to generate inaccurate information. Meanwhile, in the United States, the Department of Commerce published a formal request for comment this week on whether AI models should undergo a certification process.

Companies like Google and Microsoft are quick to say their AI bots aren’t perfect, highlighting the ambiguous nature of generative AI. Some companies are embracing regulation. “We believe that powerful AI systems should undergo rigorous safety assessments,” OpenAI says on its website. “Regulation is necessary to ensure that such practices are adopted, and we are actively engaging with governments on the best possible form of such regulation.”

The CAC wrote that AI services will be suspended if companies do not comply with the guidelines. The company responsible for the technology could be fined at least 10,000 yuan (about $1,450) and could even face criminal investigation.
