Researchers have expressed concern that AI-generated content could be used as misinformation to interfere with this fall's U.S. elections. TikTok was part of a group of 20 tech companies that signed an agreement earlier this year pledging to fight such misuse.
YouTube, owned by Alphabet Inc.'s Google, and Meta Platforms, which owns Instagram and Facebook, have also said they plan to use content credentials.
For the system to work, both the makers of the generative AI tools used to create the content and the platforms used to distribute the content must agree to the use of industry standards.
For example, when a user generates an image with OpenAI's Dall-E tool, OpenAI attaches a watermark to the resulting image. When that marked image is uploaded to TikTok, it will automatically be labeled as AI-generated.
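The detection step described above can be illustrated with a toy sketch. Content Credentials are based on the C2PA standard, which embeds a signed provenance manifest inside the image file; the label `c2pa` appears in the manifest's metadata box. The function below only scans raw bytes for that label to decide whether to apply an AI-generated tag. This is a simplified assumption for illustration: a real platform would parse the manifest and verify its cryptographic signatures, not just look for a byte pattern.

```python
# Toy sketch of how a platform might flag uploads carrying provenance
# metadata. Real C2PA verification parses the embedded manifest and
# validates its signatures; this simplified check only searches the
# raw bytes for the C2PA manifest label.

C2PA_MANIFEST_LABEL = b"c2pa"  # label used inside C2PA metadata boxes

def has_provenance_marker(image_bytes: bytes) -> bool:
    """Return True if the raw bytes contain the C2PA manifest label."""
    return C2PA_MANIFEST_LABEL in image_bytes

# Hypothetical example data: fake "image" bytes with and without the label.
tagged = b"\xff\xd8...jumb...c2pa...manifest...\xff\xd9"
untagged = b"\xff\xd8...plain image data...\xff\xd9"

print(has_provenance_marker(tagged))    # marked image -> label as AI-generated
print(has_provenance_marker(untagged))  # unmarked image -> no label
```

Note that a byte-pattern check like this is trivially defeated (the metadata can simply be stripped), which is why the scheme depends on both tool makers embedding the credentials and platforms agreeing to honor them.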
TikTok, owned by China's ByteDance, has 170 million users in the U.S. A recently passed law would force ByteDance to sell TikTok or face a ban. TikTok and ByteDance have filed suit to block the law, claiming it violates the First Amendment.
TikTok already labels AI-generated content created using tools within the app, but the latest move means the labels will also apply to content generated outside of the service.
“We also have a policy that prohibits unlabeled realistic AI, so if realistic AI-generated content appears on our platform, we will remove it as a violation of our Community Guidelines,” Adam Presser, TikTok's Director of Safety, said in an interview.
(Reporting by Stephen Nellis in San Francisco; Editing by Diane Craft)
Disclaimer: This report was auto-generated from Reuters News Service. ThePrint assumes no responsibility for its content.
