HONG KONG (Reuters) – China’s cyberspace regulator on Wednesday unveiled proposed measures to govern generative artificial intelligence (AI) services, saying it wants companies to submit security assessments to authorities before offering their services to the public.
The rules, drafted by the Cyberspace Administration of China (CAC), come as governments around the world weigh how to mitigate the dangers of an emerging technology whose investment and consumer popularity have soared in recent months since the release of OpenAI’s ChatGPT.
In recent weeks, Chinese tech giants including Baidu (9888.HK), SenseTime (0020.HK) and Alibaba (9988.HK) have unveiled new artificial intelligence models to power a range of applications, from chatbots to image generators.
The CAC said China supports AI innovation and applications and encourages the use of safe and trustworthy software, tools and data resources, but that content generated by generative AI must be in line with the country’s core socialist values.
Providers must take responsibility for the integrity of the data used to train their generative AI products and take steps to prevent discrimination when designing algorithms and training data, it said.
The regulator also said service providers must require users to submit their real identities and related information.
Providers can be fined, have their service suspended, or even face criminal investigations if they fail to follow the rules.
If inappropriate content is generated by a platform, companies must update their technology within three months to prevent similar content from being generated again, CAC said.
According to the draft, the public will have until May 10 to comment on the proposal, and the measure is expected to go into effect later this year.
Reporting by Josh Ye in Hong Kong and the Beijing newsroom; editing by Tom Hogue and Muralikumar Anantharaman