EU urges tech giants to label AI-generated content

The European Union has been at the forefront of responding to the rapid adoption of AI technology, and its latest move asks tech giants such as Google, Meta, TikTok and Microsoft to provide data on their services as part of an effort to combat misinformation, including a request to start labeling AI-generated content online.

Members of the European Commission, the EU’s executive arm, called on Monday (June 5) for tech giants to start labeling AI-generated content voluntarily, well ahead of legislation that would mandate such labeling.

The EU is currently working on an AI Act that will lay down rules for the use of AI technology across the 27-nation bloc. The European Parliament will take a crucial vote on the law next week, but even if it passes, it will likely not come into force until 2026, Bloomberg reported.

Meanwhile, the European Commission’s Vice President for Values and Transparency, Věra Jourová, said she would ask the 44 organizations that have signed the EU’s voluntary code of conduct against disinformation to create separate guidelines for addressing AI-generated disinformation.

“Signatories that integrate generative AI into their services, such as Microsoft’s Bing Chat and Google’s Bard, should build in the necessary safeguards to ensure these services cannot be used by malicious actors to generate disinformation,” Jourová said, as quoted by Politico.

“Signatories with services that may disseminate AI-generated disinformation should deploy technology to recognize such content and clearly label it to users.”

Signatories to the code of conduct include Google, Facebook and Instagram owner Meta, Microsoft, TikTok, and Twitch.

Among the many concerns the EU seeks to address is the creation of ‘deepfakes’, in which celebrities or private citizens appear to say or do things they never said or did in real life. A well-known example is a deepfake video in which former President Barack Obama appears to warn about the dangers of deepfakes – a speech Obama never actually gave.

Late last month, an apparently AI-generated image showing smoke near the Pentagon in Washington, D.C., accompanied by claims that an explosion had taken place at the military installation, caused a brief panic in the stock market.

One of the immediate concerns for the music business is the spread of AI-generated songs that use the voices of well-known artists on tracks those artists never performed. For example, an AI-generated track featuring vocals imitating Drake and The Weeknd went viral earlier this year.

It is unclear whether the operators of search engines and social media sites such as Facebook and TikTok all have the tools needed to identify AI-generated content when it is displayed, although many of them are clearly working quickly to develop such capabilities.

At its I/O conference in May, Google announced a new tool that allows users to see if an image is AI-generated, thanks to hidden data embedded in AI-generated images. The tool will be available to the public this summer.

Image editing software maker Adobe has implemented a tool called “Content Credentials” that can detect when an image has been altered by AI.
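Both approaches depend on provenance information traveling with the image itself. As a rough illustration of the general idea – not Google’s or Adobe’s actual implementation – a minimal Python sketch might scan a file for embedded provenance markers, such as the IPTC ‘trainedAlgorithmicMedia’ digital source type or a C2PA/Content Credentials manifest; the marker strings and file name used here are assumptions for illustration only.

```python
# Hypothetical sketch: crudely scan a file's raw bytes for provenance markers
# that AI-labeling standards may embed. Not an official Google or Adobe API.

AI_PROVENANCE_MARKERS = [
    "trainedAlgorithmicMedia",  # IPTC digital-source-type value for generated media
    "c2pa",                     # Content Credentials are built on the C2PA standard
]

def has_ai_provenance_marker(path: str) -> bool:
    """Return True if any known AI-provenance marker string appears in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return any(marker.encode("utf-8") in data for marker in AI_PROVENANCE_MARKERS)

if __name__ == "__main__":
    # "example.jpg" is a placeholder path, not a real asset.
    print(has_ai_provenance_marker("example.jpg"))
```

A real detector would parse and verify the signed metadata rather than match strings, but the sketch shows why embedded provenance data makes labeling feasible at platform scale.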

Similar efforts are underway at music companies. Believe CEO Denis Ladegaillerie said in May that the company is working with AI companies to introduce AI-detection mechanisms into Believe’s platform, and that those tools should be deployed within the next quarter or two.

“I believe this was a mistake on Twitter’s part… they chose confrontation, which was very much noted in the Commission.”

Věra Jourová, European Commission

Furthermore, Twitter announced last week (May 30) that it is rolling out a “notes on media” feature that allows trusted users to add context to images, such as a warning that an image was generated by AI. The note also appears on matching copies of the image posted by other Twitter accounts. Twitter cited “AI-generated images” and manipulated videos as examples of the media the feature targets.

However, unlike Adobe and Google, Twitter is not a signatory to the EU’s code of conduct. Owner Elon Musk reportedly pulled the social media platform out of the group last month, prompting a sharp reaction from European Commission officials.

“Obligations remain. You can run but you can’t hide,” EU Internal Market Commissioner Thierry Breton said in a tweet on May 26.

“I believe this was a mistake on Twitter’s part,” Jourová added on Monday, as quoted by Politico. “They chose confrontation, which was very much noted in the Commission.”

Breton pointed out that after August 25th, the code of conduct will no longer be voluntary, but will become a legal obligation under the EU’s new Digital Services Act (DSA).

Under the DSA, very large online platforms (VLOPs) such as Twitter and TikTok, as well as widely used search engines such as Google and Bing, face substantial fines if they fail to identify deepfakes – whether images, audio or video – with “prominent markings.”

The European Parliament is working on similar rules for companies that generate AI content as part of the AI Act, Politico reported.

Signatories to the code of conduct must publish a report in mid-July detailing their efforts to stop disinformation on their networks, as well as their plans to prevent AI-generated disinformation from spreading through their platforms and services, Politico added.




