The “Similarity Detection” tool works similarly to YouTube’s Content ID copyright protection system. Rather than searching for copyrighted audio or video, the program searches for AI-generated images that resemble a person’s face or appearance in uploaded content. If a match is found, users can review the identified video and request removal if it violates YouTube’s privacy guidelines.
Fraudsters have found AI-generated videos of celebrities, known as “deepfakes,” to be particularly effective tools for scams. After initial testing, the technology was made available to approximately 4 million creators in the YouTube Partner Program in October 2025.
YouTube will proactively contact politicians and journalists on the platform about the similarity detection tool, a company official said. They can then choose whether to sign up and use the service.
The move comes amid growing concerns over the rapid proliferation of AI-generated fake media, which can convincingly depict individuals saying and doing things they never actually said or did.
These fake videos are increasingly used to spread disinformation, sway public opinion, and damage people’s reputations.
YouTube noted that the expanded access is aimed at protecting public figures in public discussions, where the potential for impersonation is high.
Officials say public leaders and journalists are often the focus of manipulated media, which can sway political discourse and undermine public trust in reporting.
According to the YouTube blog, the platform “has a long history of protecting free expression and content in the public interest. This includes preserving content such as parody and satire, even when used to criticize world leaders and influential figures.”
How the “similarity detection tool” works
Program participants must verify their identity by providing a short video sample and a government-issued ID, which help the system learn what they look like. Once enrolled, the AI scans videos posted to the platform for potential facial matches.
If a suspected deepfake is found, users can review the footage and request its removal through YouTube’s moderation process. However, the company says not all requests will result in removal, especially if the content is satire or parody or complies with the platform’s free expression policy.
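YouTube has not published the internals of its matching pipeline, but the process described above can be sketched at a high level. The snippet below is a minimal, hypothetical illustration: it assumes an upstream face-recognition model has already reduced each face to a fixed-length embedding vector, and flags uploads whose embedding is close (by cosine similarity) to the enrolled creator's. The threshold value and all names are illustrative, not YouTube's.

```python
import math
from dataclasses import dataclass

# Assumed cutoff for flagging a likely match (illustrative value only).
SIMILARITY_THRESHOLD = 0.92


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


@dataclass
class MatchResult:
    video_id: str
    similarity: float
    flagged: bool  # True -> surfaced for the creator to review and, if


def scan_uploads(enrolled_embedding: list[float],
                 uploads: list[tuple[str, list[float]]]) -> list[MatchResult]:
    """Compare an enrolled creator's embedding against new uploads.

    Flagged videos would then go to human review, where removal is
    requested only if the content violates privacy guidelines.
    """
    results = []
    for video_id, face_embedding in uploads:
        score = cosine_similarity(enrolled_embedding, face_embedding)
        results.append(MatchResult(video_id, score, score >= SIMILARITY_THRESHOLD))
    return results


# Toy usage: one upload reuses the enrolled face, one is unrelated.
enrolled = [0.6, 0.8, 0.0]
uploads = [("vid_suspect", [0.6, 0.8, 0.0]), ("vid_unrelated", [0.0, 0.0, 1.0])]
results = scan_uploads(enrolled, uploads)
```

In this sketch a flag is only the start of the workflow: as the article notes, a flagged video is reviewed by the creator and by YouTube, and satire or parody may stay up even when the match is genuine.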
Data submitted by users will be used only to “enhance” the detection tool, not to train the Google AI models that power YouTube, a representative told NBC News.
“We have plans to significantly expand access over the next year, and our goal is to get this technology into the hands of those who need it,” they added.
For now, the similarity detection technology focuses on facial likeness, but YouTube is also exploring voice imitation, according to Amjad Hanif, YouTube’s vice president of creator products. And, as with Content ID, YouTube is beginning to consider letting users monetize identified videos that use their likeness.
