TikTok becomes first platform to require watermarking of AI content

TikTok plans to start labeling AI-generated images and videos uploaded to its video-sharing service.

“TikTok is starting to automatically label AI-generated content (AIGC) when it is uploaded from certain other platforms. To do this, we are partnering with the Coalition for Content Provenance and Authenticity (C2PA) to become the first video sharing platform to implement its Content Credentials technology,” the company revealed.

The Chinese short-form video platform said it plans to extend the feature to audio-only content “soon.” TikTok already labels AI-generated content created within the app, and creators are required to label realistic AI-generated content as such. How effective that requirement is remains debatable.

Content Credentials was created by the C2PA, a coalition co-founded by Adobe, Arm, BBC, Intel, Microsoft, and Truepic. Its goal is to develop open, royalty-free technical standards to combat disinformation.

The technology acts as a kind of watermark, attaching metadata to content that TikTok can use to instantly recognize and label AIGC.

“We know who created it, when it was created, what edits were made, and whether AI was used,” Adobe Chief Trust Officer Dana Rao explained in a TV interview. He compared the metadata to a nutrition label for digital content.
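To make that concrete, here is a minimal sketch of how a platform might act on such metadata once an asset's manifest has been parsed and its signature verified. The c2pa.actions assertion and the IPTC trainedAlgorithmicMedia source type come from the public C2PA specification, but the manifest layout below is simplified and the helper function is hypothetical, not TikTok's or Adobe's actual code.

```python
# Hypothetical sketch: deciding whether an asset should get an "AI-generated"
# label based on its Content Credentials manifest. The manifest layout is a
# simplified stand-in for a parsed C2PA manifest store, not a real library's output.
import json

# IPTC digital source type the C2PA spec uses to mark content produced by
# a generative model.
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any c2pa.actions assertion declares a generative source."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Example manifest as a platform might see it after parsing an uploaded file.
manifest = json.loads("""
{
  "claim_generator": "example-genai-app/1.0",
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {"action": "c2pa.created",
           "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"}
        ]
      }
    }
  ]
}
""")

print(is_ai_generated(manifest))  # True -> apply the AIGC label
```

In a real pipeline, the manifest is embedded in the media file and cryptographically signed, so a platform would verify the signature chain before trusting any of these fields.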

The era of deepfakes has arrived

Concerns are growing around the world about people's ability to spot deepfakes, whether they take the form of fake candidates applying for remote IT jobs or scammers after money or pornographic material.

Just this week, the internet was entertained by a stream of AI-generated images of celebrities who did not attend the Met Gala. The fakes were so convincing that even pop star Katy Perry's mother was fooled.

“The AI-generated fake photos from the Met Gala are a dangerous omen of what's to come between now and the election,” observed one individual based in the United States.

Microsoft Threat Analysis Center manager Clint Watts warned last month that it would be surprisingly easy for deepfakes to subvert elections. Microsoft should know: its own VASA-1 tool is deemed too dangerous to release due to ethical considerations.

This scenario is currently playing out in India, where AI deepfakes of Bollywood stars endorse political parties and level criticism, against the backdrop of elections that will determine the fate of current Prime Minister Narendra Modi.

Meanwhile, OpenAI released model safety guidance earlier this week, acknowledging that it is considering ways to support the creation of NSFW, or “not safe for work,” content.

Government intervention

Spurred by concerns over deepfaked images and videos of both a Bollywood actor and a lawmaker, India's Ministry of Electronics and IT (MeitY) issued an advisory last fall recommending that social media companies remove reported deepfakes from their platforms within 36 hours.

Failure to act leaves the companies liable for third-party content hosted on their platforms.

Meanwhile, American entrepreneur Kathy Ho recently found herself in the middle of a TikTok deepfake nightmare after one of her clothing designs went viral online. She found a TikTok video in which another face had been superimposed on her own body, posted by a counterfeiter of her skirt design who needed promotional content.

She described it as feeling “like being in an episode of Black Mirror” and urged her followers to report the incident.

“Your report button is just as powerful as mine. Whatever power we have will come from strength in numbers,” Ho implored.

“Honestly, it's time for the Department of Commerce to seriously crack down on counterfeit goods,” said one fed-up observer.

The US Department of Commerce has requested [PDF] an additional $62.1 million in fiscal year 2025 to “protect, regulate, and promote AI, including to protect Americans from societal risks.”

In testimony defending the budget before the House Appropriations Committee, US Secretary of Commerce Gina Raimondo said these funds would go to the AI Safety Institute.

“Everyone, including myself, is worried about synthetic content, and we want companies to put watermarks on things that are generated by AI. So what is a good watermark? What is a proper red team? The AI Safety Institute will develop standards to help keep Americans safe,” she explained.

“We're investing in scientists, we're investing in policymakers [at the National Telecommunications and Information Administration (NTIA)] who help shape AI policy,” she added.

Watermarks are not foolproof

Unfortunately, watermarking may not be the savior it is advertised to be. A team from the University of Maryland in the US investigated the reliability of digital image watermarking and found the technology is not very robust.

The researchers developed watermark-breaking attacks and were able to defeat every existing watermarking scheme they encountered.
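The specifics of those attacks are in the team's paper, but a toy example shows why naive invisible watermarks are brittle in the first place. The sketch below, an illustration of the general fragility rather than the Maryland team's method (which targets far more robust schemes), embeds a one-bit-per-pixel watermark in an image's least significant bits and shows that a routine JPEG re-encode reduces recovery to roughly coin-flipping:

```python
# Toy demonstration of watermark fragility: an LSB watermark survives a lossless
# round trip but is destroyed by ordinary JPEG re-compression. This is an
# illustrative sketch, not the University of Maryland attack.
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
pixels = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=(64, 64), dtype=np.uint8)         # 1-bit watermark

# Embed: overwrite the least significant bit of the red channel with the mark.
pixels[..., 0] = (pixels[..., 0] & 0xFE) | mark

# Extraction from the untouched image recovers the watermark perfectly.
assert np.array_equal(pixels[..., 0] & 1, mark)

# "Attack": one pass of JPEG encoding, which any platform applies routinely.
buf = io.BytesIO()
Image.fromarray(pixels).save(buf, format="JPEG", quality=90)
buf.seek(0)
attacked = np.asarray(Image.open(buf))

recovered = attacked[..., 0] & 1
accuracy = (recovered == mark).mean()
print(f"bit recovery after JPEG: {accuracy:.1%}")  # typically near 50%, i.e. chance
```

Production watermarks are designed to survive exactly these kinds of transformations, which is why the Maryland finding, that even those schemes can be scrubbed, is notable.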

“Like some other issues in computer vision (e.g. adversarial robustness), we think image watermarking will become a competition between defense and attack in the future,” the boffins said. ®




