Regulators dust off rule books to tackle generative AI like ChatGPT



  • Watchdogs are racing to keep up with the mass adoption of AI
  • With new laws still pending, regulators adapt existing ones
  • Generative tools face privacy, copyright and other challenges

LONDON/STOCKHOLM, May 22 (Reuters) – As the race to develop more powerful artificial intelligence services like ChatGPT accelerates, some regulators are relying on old laws to control a technology that could upend the way societies and businesses operate.

The European Union is at the forefront of drafting new AI rules that could set a global benchmark for addressing the privacy and safety concerns that have arisen with rapid advances in the generative AI technology behind OpenAI’s ChatGPT.

However, it will be several years before the legislation takes effect.

“In the absence of regulation, the only thing governments can do is apply existing rules,” said Massimiliano Cimnaghi, a European data governance expert at consultancy BIP.

“If it’s about protecting personal data, data protection laws apply. If it’s a threat to people’s safety, there are regulations that were not specifically defined for AI, but they are still applicable.”

In April, Europe’s national privacy watchdogs set up a task force to address issues with ChatGPT after Italian regulator Garante took the service offline, accusing OpenAI of violating the EU’s GDPR, a wide-ranging privacy regime enacted in 2018.

ChatGPT was reinstated after the U.S. company agreed to install age-verification features and to let European users block their information from being used to train the AI model.

A person close to Garante told Reuters the agency would begin a broader review of other generative AI tools. French and Spanish data protection authorities also launched investigations in April into OpenAI’s compliance with privacy laws.

Bringing in the experts

Generative AI models have become well known for making mistakes, “hallucinating” and spewing false information with eerie certainty.

Such errors could have serious consequences: a bank or government department using AI to speed up decision-making could unfairly reject people for loans or benefit payments. Big tech companies, including Alphabet Inc.’s Google (GOOGL.O) and Microsoft (MSFT.O), have stopped using AI products deemed ethically risky, such as financial products.

Regulators are aiming to apply existing rules covering everything from copyright to data privacy to two key issues: the data fed into models and the content they produce, according to six regulators and experts in the United States and Europe.

Agencies in both regions are being encouraged to “interpret and reinterpret their mandates,” said Suresh Venkatasubramanian, a former White House technology adviser. He cited the U.S. Federal Trade Commission’s (FTC) investigation of algorithms for discriminatory practices under existing regulatory powers.

In the EU, the proposed AI Act would force companies like OpenAI to disclose any copyrighted material, such as books and photographs, used to train their models, leaving them vulnerable to legal challenges.

However, proving copyright infringement will not be straightforward, according to Sergey Lagodinsky, one of several politicians involved in drafting the EU proposals.

“It’s like reading hundreds of novels before you write your own,” he said. “Actually copying something and publishing it is another matter.”

“Think creatively”

France’s data regulator CNIL has started “thinking creatively” about how existing laws might apply to AI, said Bertrand Pailhès, its technology lead.

For example, in France discrimination claims are usually handled by the Défenseur des Droits, but its lack of expertise in AI bias has prompted CNIL to take the lead on the issue, he said.

“We remain focused on data protection and privacy, but we are considering all implications,” he told Reuters.

The organization is considering using a GDPR provision that protects individuals from automated decision-making.

“At this stage, we cannot say whether it is legally sufficient,” Pailhès said. “It will take time to reach a conclusion, and there is a risk that different regulators will take different views.”

In the UK, the Financial Conduct Authority is one of several state regulators tasked with creating new guidelines for AI. A spokeswoman told Reuters it was in talks with the Alan Turing Institute in London, along with other legal and academic institutions, to better understand the technology.

As regulators adapt to the pace of technological advancement, some industry players are calling for greater engagement with corporate leaders.

Harry Borovick, general counsel at Luminance, a startup that uses AI to process legal documents, told Reuters that dialogue between regulators and companies has been “limited” so far.

“This doesn’t bode particularly well for the future,” he said. “Regulators appear to be slow or reluctant to implement approaches that allow for the right balance of consumer protection and business growth.”

Reporting by Martin Coulter in London, Supantha Mukherjee in Stockholm, Kantaro Komiya in Tokyo and Elvira Pollina in Milan; editing by Kenneth Li, Matt Scuffham and Emelia Sithole-Matarise.

Our standards: Thomson Reuters Trust Principles.


