WASHINGTON (Reuters) – Billionaire Elon Musk said on Monday that the Chinese government would seek to introduce artificial intelligence regulations, citing meetings with officials during his recent visit to the country.
Musk, who made the remarks in a Twitter Spaces conversation on Monday with Democratic presidential candidate Robert F. Kennedy Jr., did not elaborate further.
Musk, the chief executive of Tesla Inc (TSLA.O) and owner of Twitter, said: “It’s worth noting that during my recent visit to China, I met with senior Chinese leadership about the risks of artificial intelligence and the need for some form of oversight and regulation. I think we had a very productive discussion.”
“And my understanding from those conversations is that China will start regulating AI in China.”
Reuters could not reach Chinese officials for comment outside of normal business hours.
Musk left Shanghai on Thursday, ending a two-day trip to China where he met with senior Chinese officials, including the highest-ranking vice premier.
Musk met with Chinese foreign affairs, trade and industry ministers in Beijing. He also met with Chinese Vice Premier Ding Xuexiang on Wednesday, according to sources familiar with the matter.
In April, China’s cyberspace regulator released draft measures on managing generative artificial intelligence services, saying it would require companies to submit security assessments to authorities before launching services to the public.
Several governments are looking at ways to mitigate the risks of an emerging technology that has seen a boom in investment and consumer popularity in recent months following the release of OpenAI’s ChatGPT.
The Cyberspace Administration of China (CAC) said at the time that while China supports the innovation and application of AI and encourages the use of safe and reliable software, tools and data resources, content generated by generative AI must be in line with the country’s core socialist values.
Providers are responsible for the integrity of the data used to train their generative AI products and should take steps to prevent discrimination when designing algorithms and training data, the regulator said.
Reporting by Kanishka Singh in Washington; editing by Leslie Adler and Lisa Shoemaker