Voters beware: The age of deepfake democracy

Generative artificial intelligence is increasingly being used in political campaigns around the world. In Slovakia, AI-generated audio recordings of fake candidate voices discussing election fraud went viral days before the parliamentary election. In the UK, deepfake ads of Prime Minister Rishi Sunak reached hundreds of thousands of people on Facebook. In the US, Democratic voters in New Hampshire received robocalls featuring a cloned voice of President Joe Biden. TikTok has seen a network of accounts impersonating news outlets using cloned voices, including that of former President Barack Obama. Similar incidents have occurred in India, Nigeria, Sudan, Taiwan, Moldova, South Africa, Bangladesh and Ethiopia.

The technology to create these deepfakes has existed for years, but it has recently become far cheaper and more accessible. With the cost of producing convincing fakes now negligible, distinguishing truth from fabrication is increasingly difficult. With more than 50 countries holding elections in 2024, regulators are rushing to draft legislation restricting the use of AI to create fake text, audio, and video. But the technology is evolving faster than regulatory efforts, creating a dangerous vacuum.

Fake video of British Prime Minister Rishi Sunak (Photo: YouTube screenshot)

Not all uses of generative AI are harmful; in India's recent elections, AI had a positive impact by making Prime Minister Narendra Modi more accessible to voters who speak different languages. But the technology carries significant risks of spreading disinformation. The key issue is the distribution of synthetic content, not necessarily its creation. Used ethically, AI could usher in a new era of representative government, but that "if" is critical and should not be underestimated.

Combating misinformation

Fears of a flood of misleading content designed to deceive voters are widespread, and responses are multiplying. The US government has directed companies to develop tools for detecting and labeling synthetic content. In Israel, police are seeking tools to detect video editing and deepfakes. The European Commission has asked major social networks to label AI-generated content. In Munich, 20 technology companies signed a pact to combat the deceptive use of AI in elections, and OpenAI, Google, and Microsoft have delayed the release of voice-cloning tools over concerns about election-related deepfakes. Academics and industry leaders have also called for government regulation to curb the spread of deepfakes.

However, these measures are non-binding and often lack penalties for violations. As technology advances, threats grow and tools to detect AI-generated content become less reliable. Companies that develop these AI models have little incentive to build detection tools, leaving the field open to manipulation.

Dr. Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, founded TrueMedia to help people distinguish real content from fake. TrueMedia has developed about 15 models for identifying AI-created audio, video, and images, achieving roughly 90% accuracy. Even so, common sense and source verification remain essential.
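A multi-model setup like TrueMedia's can be thought of as an ensemble whose individual detectors each score a piece of content, with the verdicts then aggregated. The following is a minimal illustrative sketch, not TrueMedia's actual system; the model names, scores, and voting rule are all hypothetical.

```python
# Hypothetical sketch of a multi-model deepfake-detection ensemble.
# Model names and probability scores below are illustrative only.

def ensemble_verdict(scores, threshold=0.5):
    """Aggregate per-model 'fake' probabilities by majority vote.

    scores: dict mapping detector name -> probability the item is synthetic.
    Returns a verdict string and the fraction of detectors that flagged it.
    """
    votes = [p > threshold for p in scores.values()]
    fake_fraction = sum(votes) / len(votes)
    verdict = "likely fake" if fake_fraction >= 0.5 else "likely real"
    return verdict, fake_fraction

# Illustrative scores from three hypothetical detectors:
scores = {"audio_model": 0.92, "face_model": 0.81, "artifact_model": 0.35}
verdict, fraction = ensemble_verdict(scores)
```

One design point this illustrates: even when each detector is imperfect, combining several partially independent signals can raise overall accuracy, which is consistent with the roughly 90% figure cited above still leaving room for error.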

Complicating the issue is an ongoing arms race between detection tools and AI companies aiming to create indistinguishable synthetic content. The very fear of deepfakes adds another layer of complexity: it is sometimes exploited to discredit genuine content as fake.

Dr. Oren Etzioni, founder of TrueMedia (Photo: Amit Shah)

“The system works with around 90% accuracy, so it will inevitably make mistakes in certain circumstances,” Etzioni told Calcalist, adding that while developing tools is important, humans are at the heart of it. “At the end of the day, it's not possible without common sense, and people need to check the sources as well. There are no absolute answers.”

Active and Passive Detection

TrueMedia's passive approach, identifying fakes after they are created, often comes too late. A complementary active approach embeds a digital watermark, or "noise," that marks content as synthetic at the moment of creation. But it faces a conflict of interest: companies like Google and Meta, which own both the AI models and the social networks, have little incentive to implement stringent measures that could hinder their business models.
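To make the "active" idea concrete, here is a minimal sketch of embedding an invisible marker in pixel data at creation time. Production systems (Google's SynthID, for instance) use far more robust, tamper-resistant schemes; this least-significant-bit demo, including the `SYNTH` tag, is purely illustrative.

```python
# Minimal sketch of active watermarking: hide a provenance tag in the
# least significant bits of pixel values. Illustrative only; real
# watermarks must survive compression, cropping, and re-encoding.

MARK = "SYNTH"  # hypothetical provenance tag

def embed_mark(pixels, mark=MARK):
    """Overwrite the LSB of the first len(mark)*8 pixels with the tag's bits."""
    bits = [int(b) for ch in mark.encode() for b in f"{ch:08b}"]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # visually negligible change
    return out

def extract_mark(pixels, length=len(MARK)):
    """Recover the hidden tag from the first length*8 pixel LSBs."""
    bits = [p & 1 for p in pixels[: length * 8]]
    chars = [int("".join(map(str, bits[i:i + 8])), 2)
             for i in range(0, len(bits), 8)]
    return bytes(chars).decode()

pixels = list(range(40, 90))   # stand-in for real image data
marked = embed_mark(pixels)
recovered = extract_mark(marked)
```

The embedded tag changes each affected pixel by at most one intensity level, so the image looks unchanged, yet a detector that knows where to look can recover the mark.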

Projects like Etzioni's offer some hope, but they have limitations: OpenAI has released a tool to identify images produced by its DALL-E 3 generator, but only to a limited group of researchers; Google and Meta are collaborating on standards for content provenance and authenticity, but much work remains.
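The provenance standards mentioned above revolve around a simple idea: publish a cryptographically signed digest alongside the content, so anyone can later verify it has not been altered. The sketch below uses HMAC from Python's standard library as a stand-in for the public-key signatures real standards such as C2PA use; the key and payload are hypothetical.

```python
# Hedged sketch of provenance verification: sign a digest of the content
# at publication, verify it on receipt. HMAC stands in here for the
# public-key signatures used by real provenance standards.
import hashlib
import hmac

KEY = b"publisher-secret"  # hypothetical signing key

def sign_content(data: bytes) -> str:
    """Return a hex digest binding the content to the publisher's key."""
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

def verify_content(data: bytes, signature: str) -> bool:
    """Check the content against its published signature."""
    return hmac.compare_digest(sign_content(data), signature)

original = b"video bytes..."       # stand-in for real media bytes
sig = sign_content(original)
assert verify_content(original, sig)          # untouched content verifies
assert not verify_content(b"edited", sig)     # any tampering breaks the check
```

Note the asymmetry this creates: provenance can prove a file is authentic and unmodified, but the absence of a signature proves nothing, which is why provenance complements rather than replaces detection.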

The rapid advancement of AI technology poses significant challenges to ensuring the integrity of elections around the world. Despite efforts to develop detection tools and establish regulations, the pace of technological development is outpacing these efforts. Vigilance, ethical use of AI, and comprehensive regulatory frameworks are essential to safeguarding democratic processes in the age of deepfakes.
