AI-generated photos and videos pose a threat to general elections as they could be used as “deepfake” images to attack the character of politicians, spread hatred and undermine trust in democracy.



  • The Cetas study called on Ofcom and the Electoral Commission to address the misuse of AI.
  • Researchers warned that AI-generated images and videos could influence the next general election.



Experts say regulators must “act swiftly” to introduce safeguards to protect the electoral process from threats posed by artificial intelligence.

The warning comes following research suggesting that fake AI-generated photos and video footage could be used to influence the upcoming general election in a range of malicious ways.

The report concluded that so-called “deepfake” images could be used to attack the character of politicians, spread hatred, undermine trust in democracy and fabricate false support.

A study by the Alan Turing Institute's Centre for Emerging Technologies and Security (Cetas) has called on Ofcom and the Electoral Commission to address the issue of AI being used to mislead the public, arguing that it is undermining confidence in the fairness of elections.


The report argued that the Electoral Commission and Ofcom should draw up guidelines setting out how political parties may use AI in their election campaigns, ask the parties to enter into voluntary agreements to abide by them, and require election materials created with AI to state this clearly.


The researchers warned that there are currently “no clear guidelines” to prevent AI from being used to create misleading content about elections.

Some social media platforms have already started to label AI-generated content, following concerns about deepfakes and misinformation, as well as a series of cases in which AI has been used to create or alter images, audio and video of politicians.

In its study, Cetas said it had drawn up a timeline of how AI could be used ahead of elections, suggesting it could be used to damage candidates' reputations, falsely claim that candidates have withdrawn from elections, or use disinformation to shape voter attitudes on certain issues.

The study also notes that misinformation about how, when and where to vote could be used to undermine the electoral process.


“With just weeks to go until the general election, political parties are already in the middle of busy campaigning,” said Sam Stockwell, a research fellow at the Alan Turing Institute and lead author of the study.


“Currently, there are no clear guidelines or expectations to prevent AI from being used to create false or misleading election information.

“That's why it's so important that regulators act quickly before it's too late.”

“Regulators can do more to help the public distinguish fact from fiction and to ensure voters do not lose faith in the democratic process,” said Dr Alexander Babuta, director of Cetas.

Meanwhile, the chairman of the House of Commons science committee warned that the UK's AI regulators are “under-resourced” compared with technology developers.

In its report on AI governance, the Science, Innovation and Technology Committee said the £10 million the government announced in February to help Ofcom and other regulators keep up with developments in AI technology was “clearly insufficient”.

It added that the next government should announce further financial support “commensurate with the scale of the challenge” and “consider the merits of a one-off or ongoing industry levy” to support regulators.

The committee's outgoing chairman, Greg Clark, said he was “concerned” that UK regulators “have insufficient resources compared with the funding available to large developers”.
