The premise of the video is clear: Nigel Farage appears on screen as a gaming streamer, commentating as he plays Minecraft. The leader of the Reform UK Party explains that he is going to log on to Prime Minister Rishi Sunak's server, find the prime minister's virtual home in the video game, and blow it up.
Mr Farage's distinctive voice can be heard explaining what he is about to do: “I've packed it full of TNT and I want to let you all know that I have absolutely no Sky TV signal in or around the house.”
A spokesman for Mr Farage admitted, with some indignation, that the video was “of course” not real and that the Reform Party leader had not been livestreaming a Minecraft tutorial during the election campaign.
“But it's pretty funny,” the spokesperson added.
The spokesperson's response summed up the role of deepfake videos in this general election, which so far have not caused the disruption that some had predicted before the campaign.
Instead, deepfakes – digital content manipulated using artificial intelligence, often to place famous people in fictional situations – have mainly taken the form of obviously false memes, such as an edited version of Prime Minister Rishi Sunak's national service announcement in which he appears to be teaching primary school children how to play Fortnite.
The Sunak clip, the Farage deepfake and footage featuring Keir Starmer were all created and uploaded to TikTok by PodcastPilotPro, a subscription AI app that lets users pretend to be on a podcast with a celebrity.
Most users seem able to tell that the AI-generated videos are fake, and are simply impressed by how clever they are. Or, as one highly popular comment under the Farage video put it: “Old people will be fooled by AI.”
But crudely manipulated real video has proven a more effective tactic so far. “I don't think we should celebrate yet. The campaign still has a long way to go,” said Tim Gatto, a digital campaign consultant. “But it doesn't necessarily need to be a very sophisticated deepfake to manipulate or deceive the public.”
“On Twitter, for example, we've seen numerous examples of people engaging with and sharing misleading content that is very crudely made, because they believe it's true or it chimes with something they strongly believe.”
A group of left-wing users opposed to Keir Starmer's Labour Party used the social network X to spread a poorly dubbed video falsely suggesting that Shadow Health Secretary Wes Streeting had criticised Labour candidate Diane Abbott. When the BBC contacted X about the video, it was removed and the account banned.
Ciaran Martin, former chief executive of the National Cyber Security Centre, said that with a few exceptions, such as the recent Slovak election, “it has proven extremely difficult to deceive large numbers of voters with deepfakes”.
Writing in the Guardian last week, he said the key was how “quickly and comprehensively” deepfakes were uncovered, and that the real risks lay at local constituency level.
Such was the case with one of the most damaging fake videos in British politics this year, in which a teacher in Dudley was falsely accused of racism while handing out Labour leaflets. Genuine doorbell camera footage was doctored and overlaid with fake subtitles claiming the teacher had used racist language.
The video was widely shared by Ahmed Yaqoob, a local lawyer and social media celebrity standing against Labour as a pro-Gaza candidate in Birmingham Ladywood. He later apologised, saying he had not known what had happened and that the video he was sent was “already subtitled”.