LONDON (AP) – The European Union is calling on online platforms such as Google and Meta to step up their fight against misinformation by adding labels to text, photos and other content generated by artificial intelligence, an official said Monday.
European Commission Vice-President Vera Jourova said the ability of a new generation of AI chatbots to create complex content and visuals in seconds poses “new challenges for the fight against disinformation.”
She said she asked Google, Meta, Microsoft, TikTok and other tech companies that have signed on to the 27-nation bloc’s voluntary agreement to combat disinformation to work on tackling the AI problem.
Online platforms that have integrated generative AI into their services, such as Microsoft’s Bing search engine and Google’s Bard chatbot, could be exploited by “malicious actors” to generate disinformation, Jourova said at a press conference in Brussels, adding that safeguards need to be built in to prevent that.
Companies whose services could spread AI-generated disinformation should deploy technology to “recognize such content and clearly label it to users,” she said.
Google, Microsoft, Meta and TikTok did not immediately respond to requests for comment.
Jourova said EU regulations are meant to protect free speech, but when it comes to AI, “I don’t think machines have the right to free speech.”
The swift rise of generative AI technology, which can produce human-like text, images and video, has dazzled many and alarmed others with its potential to transform many aspects of daily life. Europe has taken a leading role in the global movement to regulate artificial intelligence with its AI Act, but the legislation still awaits final approval and could take years to come into force.
The EU is also set to introduce separate rules this year to protect people from harmful online content, but officials worry they need to act faster to keep up with the rapid development of generative AI.
Recently debunked deepfakes include a realistic-looking photo of Pope Francis in a puffy white jacket and an image of black smoke billowing next to a building, circulated with claims of an explosion near the Pentagon.
Politicians are even enlisting AI to warn of its dangers. Danish Prime Minister Mette Frederiksen last week used OpenAI’s ChatGPT to compose the opening of a speech to parliament, saying it read so convincingly that few would guess a robot, not a human, was behind it.
European and U.S. officials announced last week that they were developing a voluntary code of conduct on artificial intelligence that could be finalized within weeks as a way to bridge the gap before the EU’s AI rules come into force.
The voluntary commitments under the EU’s disinformation code are set to become legal obligations by the end of August under the EU’s Digital Services Act, which will require the biggest tech platforms to step up their policing of hate speech, disinformation and other harmful material to protect users.
But Jourova said those companies should start labeling AI-generated content immediately.
Most digital giants have already signed up to the EU Disinformation Code, which requires companies to measure their efforts to combat misinformation and issue regular reports on their progress.
Twitter dropped out of the pact last month, in what appeared to be the latest move to loosen restrictions at the social media company since Elon Musk bought it last year.
The withdrawal drew heavy criticism, and Jourova called it a mistake.
“Twitter chose the hard way. They chose confrontation,” she said. “Make no mistake: by leaving the code, Twitter will draw a lot of attention, and its actions and compliance with EU law will come under vigorous and urgent scrutiny.”
Twitter faces a major test later this month, when European Commissioner Thierry Breton plans to travel with a team to its San Francisco headquarters to conduct a “stress test” of the platform’s ability to comply with the Digital Services Act.
Breton, the EU’s digital policy chief, told reporters Monday that he will also visit other Silicon Valley tech companies, including OpenAI, chipmaker Nvidia and Meta.
Associated Press reporter Jan M. Olsen contributed from Copenhagen, Denmark.