Democracy in the age of deepfakes: How AI videos of politicians can “distort, disrupt and corrupt” elections

AI Video & Visuals


A fake video showing former minister George Freeman switching parties has gone viral online.


The deepfake video of George Freeman MP announcing his departure from the party has raised alarms about the dangers of AI technology.

Photo: LBC/Getty


Fake videos of politicians can “distort, disrupt and corrupt” democracy, a misinformation charity has warned, as concerns grow over the impact of AI deepfakes on elections.

Full Fact is an independent fact-checking organization that works to tackle the spread of inaccurate or misleading claims online and to improve the systems that allow them to circulate.

The rapid proliferation of realistic AI-generated audio and video is making it harder for voters to know what can and cannot be trusted online, blurring the line between fact and fiction, the charity said.

The charity is concerned that this highly realistic content could distort the conversation and disrupt the election cycle, especially as the law has yet to catch up with this rapidly evolving technology.

The threat quickly gained attention in October last year when fake footage was released showing Conservative MP George Freeman defecting to Reform UK.

The Mid Norfolk MP found himself the subject of an AI-generated video in which his voice was cloned and words were put in his mouth.

The video was then posted on social media and went viral. He reported it, but police told him it wasn’t illegal.

The former Minister of State for Science and Technology is now taking action to ban this type of malicious activity.

“The deliberate spread of misinformation through AI-generated content, whether it is aimed at stealing personal information for purposes such as fraud, mis-selling, or political indoctrination, is an alarming and dangerous development.

“As an MP, I can see how this type of political disinformation can seriously distort, disrupt, and corrupt our democracy,” Freeman said in a social media post responding to the video.

“Dangerous”

Full Fact’s Mark Frankel called the phenomenon “dangerous.”

“These tools are creating false narratives and reshaping how audiences consume and access information.”

“If there is a video of a politician saying something they never said, or doing something they never did, the public could be misled.”

He says the creators of such manipulated content are often motivated by financial gain from affiliate marketing, or are “individuals with a particular agenda, an axe to grind over issues such as immigration.”

“They want to disrupt or discredit the government, and they may be using these videos to further spread their ideas.”

But, as Frankel explained, under current law, “no one is obligated to remove these videos or classify them as AI-generated unless a crime is committed, such as inciting racial hatred or spreading terrorism, for example.”

The government has now announced plans to criminalize the creation of sexually explicit “deepfake” images.

This comes amid a backlash in January over images created online using Elon Musk’s Grok AI that digitally removed people’s clothing, mostly from women and girls.


Elon Musk’s Grok chatbot sparked outrage after it was used to digitally undress people on X.

Photo: Getty


Mr Frankel welcomed the changes, but said the government needed to stop “papering over the flaws” in the Online Safety Act and “more comprehensively consider the broader legal but harmful issues that are currently not covered by action”.

Full Fact is calling for the current Representation of the People Bill to include a criminal offense for creating deepfake videos of politicians, to increase transparency and combat political misinformation, particularly during election periods.

During the opening parliamentary debate on the bill on March 2, several members across the House expressed support for this position. Speaking for the Conservatives, James Cleverly said: “We are willing to work on sensible and appropriate measures to ensure that AI-generated materials are clearly labeled and transparency requirements are imposed.”

“Whack-a-mole”

Full Fact believes the government needs to go further.

“The current state of online safety law is extremely messy and unfit for purpose in combating misinformation,” the charity says in a statement on its website.

“We believe this bill should not be repealed, but rather strengthened and made more robust. It is time to introduce a specific AI bill that explicitly refers to deepfakes. This should be included in the King’s Speech, and the government should set a timetable for a consultation on a comprehensive AI bill.”

In a letter, Secretary of State for Science, Innovation and Technology Liz Kendall acknowledged that “legislation needs to adequately reduce the risk of new harms as AI technologies develop.”

“These problems are like ‘whack-a-mole’; when you put out one fire, another one starts,” Frankel said. “We’re at risk of being outpaced by AI, so we need to approach this problem in a more holistic way.”

He added that future regulations should place greater responsibility on technology platforms to label AI-generated content during campaigns.


George Freeman believes governments should take “bold action” against AI deepfakes.

Photo: Alamy


“Hijacked for nefarious purposes”

“We cannot allow individuals to live in fear that their identity will be taken over for illicit purposes,” Freeman said in a speech in Parliament on the issue this week.

“The time has come for Parliament and the UK Government to take bold action.

“Denmark has already made great strides in legislating such measures by giving people copyright over their bodies, facial features and voices, and I believe the UK should follow suit.”

“When I was the science minister responsible for AI, I refused to approve proposals for text and data mining that did not ensure appropriate safeguards for the creative industries.

“These precautions should apply to all individuals in the UK.”

“Totally unacceptable”

A DSIT spokesperson said: “We are well aware that deepfakes can be divisive, spread false information and influence public opinion. The use of online tools to target and exploit people is completely unacceptable.”

“Under the Online Safety Act, services that allow users to upload content and interact with other users, including social media platforms, must proactively tackle illegal and fraudulent content.

“This includes fraud by false representation, whether the content is shared or created by a user, as well as other illegal content subject to enforcement.”

“Foster division”

Shadow Technology Secretary Julia Lopez MP commented: “Technology is rapidly evolving, and our laws must keep pace to protect our democracy.

“We know that deepfake videos are undermining trust and confidence in what people see and hear online, and that our enemies are weaponizing that online space to spread misinformation and foster division.

“Governments need to find ways to help people determine what is real and what is fake, so that advances in technology do not undermine what we hold dear. This includes watermarking.”




