Deepfakes on YouTube are on the rise, but now you can report AI content that uses your likeness



Key Takeaways

  • YouTube has introduced the ability to report AI-generated videos featuring individuals who do not want their likeness used.
  • AI-generated content, including video, is on the rise and can be used for both harmless parody and dangerous misinformation.
  • YouTube's privacy complaint process allows affected individuals to report AI-generated content; the uploader then has 48 hours to remove the video.



Artificial intelligence is slowly but surely starting to have tangible effects on our lives, some good and some bad, forcing tech companies to take steps to protect the people who use their platforms. This includes YouTube, which now allows individuals to report AI-generated videos of themselves.


AI-generated YouTube videos are on the rise

AI-generated content is everywhere: news reports and online articles are generated by AI bots like ChatGPT and Gemini, photos and artwork are generated by the likes of DALL-E and Imagen, and videos are generated by a variety of constantly improving text-to-video AI tools.

Of the various forms of content that AI can create, video is currently the least sophisticated, and there are many examples of AI models producing extremely strange and disturbing clips. The technology is improving, however, and simpler manipulations, such as putting words in someone's mouth, are becoming harder to detect.


AI-generated videos range from harmless parodies in which celebrities say things they didn't actually say, to deepfakes created to spread misinformation. The former are entertaining but can still be upsetting for the celebrities involved, while the latter are being used to push policies, destabilize democracies, and sway elections.

Either way, as the technology improves and the number of AI-generated videos grows, online video sites and social media platforms will be forced to put up barriers to prevent misuse, which is why YouTube is taking action to address the issue, as first reported by TechCrunch.


How to report AI-generated content in a YouTube video

YouTube has quietly rolled out an option that allows individuals to report videos in which an AI-generated or otherwise synthetic character looks or sounds like them, in accordance with its privacy guidelines. This applies regardless of whether the video is intended to cause harm (such as discrediting someone) or is something more innocuous (but still unwanted).

If you see an AI-generated or synthetic representation of yourself in a YouTube video, you can now submit a takedown request through YouTube's privacy complaint process. This differs from the general reporting feature, which lets anyone flag a video for other forms of misuse: here, the affected individual must file the report themselves.


To report a video that contains an AI-generated or synthetically altered likeness of you, follow YouTube's privacy complaint process. Once you get to step six of six, you'll be able to “Report Altered or Synthetic Content.” YouTube has a page explaining these labels, but the short version is that they're “content that looks or resembles you, but has been significantly edited or generated by AI or other tools.”

What happens when you report AI content on YouTube?

If you report altered or synthetic content via YouTube's privacy complaint process, the uploader will be asked to remove the video within 48 hours. If the video is not removed within that time frame, YouTube will investigate further.

YouTubers also have the option to remove personal (and identifying) information from their videos and blur the faces of the people involved, but simply making a video private is not enough to resolve a claim, since nothing prevents the uploader from switching the video back to public at a later date.


YouTube has not promised to remove all AI-generated content that is reported, instead promising to “consider a variety of factors when evaluating complaints.” It also requires, in all but a small number of cases, that complaints come from the affected person themselves, meaning fans cannot file complaints on behalf of celebrities.

Still, this is a good first step toward tackling a problem that is sure to grow in the coming years.


