Governments around the world looking to regulate generative AI

AI News

As generative AI tools grow in popularity, government agencies around the world are weighing in on how to rein in the enthusiasm of companies and vendors through regulation.

Most recently, China released its latest AI rules shortly after tech giant Alibaba officially unveiled its version of ChatGPT on April 11th.

Alibaba’s AI chatbot

Introduced in a demonstration at the 2023 Alibaba Cloud Summit, Tongyi Qianwen, which reportedly means "Truth from a Thousand Questions," can draft invitations, plan itineraries, and advise shoppers. The Chinese cloud vendor plans to integrate its large language models into enterprise applications such as the workplace messaging app DingTalk and the voice assistant Tmall Genie.

Users could begin registering for Tongyi Qianwen on April 14th.

Alibaba’s release comes after competitors SenseTime and Baidu recently launched their own AI bots: SenseTime debuted SenseChat, and Baidu launched Ernie Bot.

These launches come as regulators around the world try to navigate the popularity of generative AI systems and large-scale language models.

China’s response

Hours after Tongyi Qianwen’s release, the Cyberspace Administration of China (CAC) proposed new rules that would require AI providers to submit their products for review before making them public. The CAC also noted that AI-generated content must reflect socialist values.

Kashyap Kompella, an analyst and founder of RPA2AI Research, said the CAC’s proposed rules follow the Chinese government’s general approach to regulating websites, social media apps, and services. The proposed regulations would require providers of generative AI services to minimize harm to users, avoid infringing copyrights, exclude inaccurate content, and steer clear of sensitive topics, he said, adding that the aim is to prevent content that criticizes the Chinese regime.

“I don’t think anyone expected anything different,” he said. “AI chatbot regulation in China is par for the course.”

Attempts at global AI regulation

China is one of many countries taking steps toward regulating generative AI. On Tuesday, the U.S. Department of Commerce said it will spend the next 60 days collecting public comment on AI audits, risks, and how to build consumer trust in AI systems.

Additionally, Italy banned ChatGPT last month, with the country’s data protection authority ordering OpenAI to stop processing users’ data while it investigates violations of European privacy rules, including whether ChatGPT breached GDPR. On April 12th, the authority gave OpenAI an April 30th deadline to comply with specific data protection requirements before it could resume operating in the country.

Despite these attempts to regulate generative AI, many governments have fallen behind.

“Regulation is struggling to keep up with technology,” Kompella said. Regulation is also not simple, he added: regulators need training to address data, data protection and privacy, intellectual property rights, inappropriate use of AI systems, and AI hallucinations.

“Companies should also assess data security breaches while using such chatbots,” said Kompella.

“When it comes to regulating AI, especially generative AI, there are still a lot of open questions about what and how to regulate,” says Forrester Research analyst Rowan Curran. “The potential impact of regulation on both technology development and deployment will take time to resolve.”

But rather than outright banning specific systems, countries trying to understand generative AI can open a conversation about the appropriate use of generative AI models, said Sarah Kreps, director of the Tech Policy Institute at Cornell University.

“It goes to the idea of explainable AI, trying to understand models,” she said. “It’s a beneficial direction compared to a ban.”
