AI rules take center stage as ChatGPT concerns grow

Concerns about AI and tools such as ChatGPT, OpenAI's large language model, are turning to action at the national level as governments grapple with regulations and policies for the technology.

The Italian government temporarily banned ChatGPT earlier this month over data privacy concerns, but said it would lift the ban if OpenAI complied with a series of demands. The French government is now evaluating the tool, and the European Data Protection Board has created a task force focused on ChatGPT and AI privacy rules.

In the US, the White House wants more information on AI-related risks. On Tuesday, the Department of Commerce’s National Telecommunications and Information Administration launched a solicitation for comments on policies to ensure AI accountability.

So far, the US has yet to move forward with rules and regulations on AI. Instead, last year the White House released a blueprint for an AI Bill of Rights to guide companies on ethical AI implementation. The Commerce Department's inquiry will inform the Biden administration's approach to AI risks.

"Responsible AI systems can bring enormous benefits, but their potential consequences and harms must be addressed," said Alan Davidson, U.S. assistant secretary of commerce, in a statement. "[The inquiry] will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems."


Italy's ChatGPT ban is likely to heighten the data privacy concerns that many governments already have about AI, said Gartner analyst Nader Henein. As companies increasingly use tools such as ChatGPT, it will be important for CIOs and other business leaders to keep an eye on regulatory changes, Henein said.

“They shouldn’t jump in with both feet to say, ‘There’s a generative AI chatbot on the platform that can help you do this,’” Henein said. “Watch out for shiny things.”

Italy's ban serves as a warning

Henein said Italy's ban is not about ChatGPT's technology, but rather about OpenAI's noncompliance with the EU's GDPR. ChatGPT's surge in adoption may have overwhelmed the company and landed it in hot water over compliance.

Henein said he is more concerned about how governments outside Italy will regulate AI-based tools. Regulators are likely to take a GDPR-style approach, placing responsibility on the shoulders of companies that use tools such as ChatGPT, so business leaders should keep future regulation of large language models in mind, he said.

Fast-adopting companies risk becoming more reliant on these new technologies, Henein said, and could suddenly find themselves noncompliant when regulations change.

“You can’t pick and choose information from these models. It’s not how they work,” he said. “You can’t just roll back to a point in time and say, ‘I’m going to delete that information.'”

Regulators face a bumpy road ahead for AI regulation

Arthur Herman, a senior fellow at the Hudson Institute, a think tank, said generative AI technologies raise concerns beyond data privacy.

Among those concerns is the sheer amount of data that large language models collect and use to power machine learning models, which could include potentially protected data, such as copyrighted material.

"There is a big fear wave going on about AI," said Herman.

However, Herman also cautioned regulators. While holding companies accountable for harm caused by such technology is important, he said, building trust in these systems matters more than piling on regulation.

Indeed, in its response to the Department of Commerce's request for comment, the Center for Data Innovation said that growing alarm about AI systems threatens the United States' "innovation-friendly approach to the digital economy."

"Not even the most extensive internal review can anticipate all the potential pitfalls of using algorithms, so the best way to achieve better outcomes for consumers is not to bog down companies using algorithms with new regulations," said Hodan Omaar, senior policy analyst at the Center for Data Innovation, in a statement.

OpenAI responded to many of the concerns about its technology, including data privacy, in a blog post published earlier this month. The company said it spent six months evaluating iterations of GPT-4 to better understand its risks and benefits, but noted that improving the safety of AI systems can take even longer.

"[P]olicymakers and AI providers need to ensure that AI development and deployment are managed effectively on a global scale, so no one cuts corners to get ahead," the OpenAI blog post said.

Makenzie Holland is a news writer covering big tech and federal regulation. Before joining TechTarget's editorial team, she was a crime and education reporter at the Wilmington Star-News and the Wabash Plain Dealer.
