YouTube has changed its privacy policy to allow users to request the removal of AI-generated content that resembles their appearance or voice.
“If someone has used AI to alter or create synthetic content that looks or sounds like you, you can ask us to take it down,” YouTube's updated privacy guidelines state. “To be eligible for removal, content must be a realistic alteration or synthesis of your likeness.”
YouTube quietly made the change in June, as first reported by TechCrunch.
Removal requests are not automatically approved: YouTube's privacy policy says the platform can give uploaders 48 hours to remove content. If the uploader doesn't take action within that time, YouTube will begin a review.
The Alphabet-owned platform said it considers several factors when deciding whether to remove a video:
- Whether the content has been altered or synthesized
- Whether the content is presented to viewers as altered or synthetic
- Whether an individual can be identified
- Whether the content is realistic
- Whether the content contains parody, satire, or other public interest value
- Whether the content features public figures or celebrities engaging in sensitive activities, such as criminal activity, violence, or endorsing a product or political candidate
YouTube also notes that a “first-party complaint” is required, meaning only the person whose privacy was violated can file one. There are exceptions, however: a complaint may also be filed by a parent or guardian, by a legal representative, by someone acting on behalf of a person who does not have access to a computer, or by a next of kin on behalf of a deceased person.
It's worth noting that a video removed under the Privacy Guidelines does not count as a “strike” against the uploader, which could otherwise lead to a ban, loss of ad revenue, or other penalties. Strikes are issued only for violations of the Community Guidelines.
The policy is the latest in a series of changes YouTube has made to address the issue of deepfakes and other controversial AI-generated content appearing on its platform.
Last fall, YouTube announced it was developing a system that would allow music partners to request the removal of content that “mimics an artist's distinctive singing or rapping voice.”
This comes after numerous music deepfakes went viral last year, including the infamous “Fake Drake” track that garnered hundreds of thousands of streams before being pulled from streaming platforms.
YouTube also announced that any AI-generated content on its platform must be labelled as such, and introduced new tools to allow uploaders to add labels informing viewers that their content was created by an AI.
“Creators who consistently choose not to disclose this information may be subject to content removal, suspension from the YouTube Partner Program, or other penalties,” YouTube said.
YouTube also said that regardless of the label, AI-generated content will be removed if it violates its Community Guidelines.
“For example, a synthetic video depicting realistic violence may be removed if its purpose is to shock or disgust viewers.”
YouTube isn't the only company trying to address the issue of deepfakes on its platform. TikTok, Meta, and other companies are also working on the problem following controversies over deepfakes that have appeared on their platforms.
Legislation in the works
The issue has also been taken up at the legislative level. Congress is weighing the No AI FRAUD Act in the House of Representatives and the NO FAKES Act in the Senate, bills that would extend publicity rights to cover AI-generated likenesses.
These bills would give individuals intellectual property rights over their likeness and voice and allow them to sue creators of unauthorized deepfakes. The proposed laws aim to protect artists from having their work and images stolen and to prevent individuals from being exploited through sexually explicit deepfakes.
As YouTube works to mitigate the worst effects of AI-generated content, it's also working to develop its own AI technology.
Reports last month in the Financial Times said the platform was in talks with the three major music companies, Sony Music Entertainment, Universal Music Group, and Warner Music Group, to license their music for training AI tools capable of creating music.
This follows YouTube's partnership with UMG and WMG last year to work with music artists to create AI music tools.
According to the FT, YouTube's previous efforts to create AI music tools have not met expectations: only 10 artists signed up to help develop Dream Track, a tool meant to provide AI-generated music for YouTube Shorts, the platform's answer to TikTok.
YouTube wants to enlist “dozens” of artists in a new effort to develop AI music tools, a person familiar with the matter told the FT.