Exceptions to first-party claims include removal requests filed on behalf of individuals who are minors, deceased, or do not have access to a computer. Even if you request the removal of such content, there is no guarantee it will be taken down, even when it is your own appearance or voice that the AI content has copied. YouTube warns that it makes its own judgment on whether to remove content based on a variety of factors.
These factors include whether the content is AI-generated or synthetic, whether a specific person is identifiable, and whether the content could be considered satire, parody, or of public value. YouTube also weighs whether the content features well-known or public figures and whether it shows them engaged in “sensitive” behavior, such as committing crimes, carrying out violent acts, or endorsing political candidates.
Given that this is a US election year, YouTube is particularly concerned about AI content that depicts famous or public figures falsely endorsing a particular candidate. While not directly tied to the upcoming election, this year has already seen a surge in AI videos imitating Donald Trump's voice and likeness, and Trump could ask YouTube to remove such videos under this process.
When YouTube receives a complaint requesting the removal of AI content, it asks the person who uploaded the content to respond within 48 hours. If the content is removed within that window, the complaint is closed; if it is not, YouTube reviews the situation itself. Removal means the video is taken down from the platform entirely and that the complainant's name and personal information are stripped from the video's title, description, and tags.
Uploaders can also blur the faces of the people appearing in their videos, but making a video private does not satisfy a removal request, since a private video can be switched back to public at any time. Likewise, simply labeling a video as AI-generated content does not protect it from removal if it violates YouTube's Community Guidelines.