AI-Generated Disinformation Threatens to Mislead Voters in 2024 Elections



WASHINGTON (AP) – Computer engineers and tech-savvy political scientists have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to deceive voters and potentially undermine an election.

The synthetic images that did emerge were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No longer.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, video and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake, digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as alarming as they are significant: Generative AI can not only rapidly produce targeted campaign emails, texts and videos, it could also be used to mislead voters, impersonate candidates and undermine elections at a scale and speed not yet seen.

WATCH: Security experts warn of potential threats to democracy from AI tools

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, it’s going to have a major impact.”

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media that confuses voters, slanders a candidate or even incites violence.

Among them: automated robocall messages in a candidate’s voice instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate has dropped out of the race.

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, founder of the nonprofit Allen Institute for AI (AI2). “A lot of people would listen. But it’s not him.”

WATCH: The ‘godfather of AI’ discusses the dangers the developing technology poses to society

Former President Donald Trump, who is running for president in 2024, has shared AI-generated content with his followers on social media. A manipulated video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to the CNN town hall with Trump last week, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which appeared after President Joe Biden announced his reelection campaign, opens with a strange, slightly warped image of Biden and the text, “What if the weakest president we’ve ever had was re-elected?”

A series of AI-generated images follows: Taiwan under attack; boarded-up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

The RNC’s description of the ad reads: “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.”

Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas, noted that the RNC acknowledged its use of AI, but said others, including nefarious political campaigns and foreign adversaries, will not. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

“What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

READ MORE: Twitter, Facebook remove state-backed foreign accounts

AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump’s mug shot also fooled some social media users, even though the former president did not have one taken when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Rep. Yvette Clarke, D-N.Y., has introduced legislation in the House that would require candidates to label campaign advertisements created with AI. She has also sponsored a bill that would require anyone creating synthetic images to add a watermark indicating that fact.

WATCH: The possible dangers as artificial intelligence grows more sophisticated and widespread

Several states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her biggest fear is that generative AI could be used before the 2024 election to create videos or audio that incite violence and pit Americans against one another.

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be fooled, and it only takes a split second. People are so busy with their lives that they don’t have the time to check every piece of information. In a political season, that could be extremely disruptive.”

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.”

Other forms of artificial intelligence have long been a feature of political campaigning, with data and algorithms used to target voters on social media and automate tasks such as tracking donors. Campaign strategists and tech entrepreneurs hope the latest innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his staff to use it as well, provided any content drafted with the tool is later reviewed by human eyes.

Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool named Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns.

“The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” he said.

Swenson reported from New York.
