AI presents political peril for 2024 with threat to mislead voters



The synthetic images that emerged were often crude, unconvincing and costly to produce, especially when other kinds of misinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, video and audio in seconds, at minimal cost. When strapped to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for the 2024 campaigns and elections are as large as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts and videos, it also could be used to mislead voters, impersonate candidates and undermine elections on a scale and at a speed not yet seen.

“We’re not prepared for this,” warned AJ Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged.”

AI experts can quickly rattle off a number of alarming scenarios in which generative AI is used to create synthetic media that confuses voters, slanders a candidate or even incites violence.

Among them: automated robocall messages in a candidate’s voice instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate has dropped out of the race.

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the nonprofit Allen Institute for AI (AI2). “A lot of people would listen. But it’s not him.”

A dystopian campaign ad unveiled last month by the Republican National Committee offers a glimpse into this digitally manipulated future. The online ad, which was posted after President Joe Biden announced his reelection campaign, opens with a bizarre, slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

A series of AI-generated images follows: Taiwan under attack; boarded-up storefronts in the United States as the economy collapses; soldiers and armored military vehicles patrolling local streets as waves of tattooed criminals and immigrants sow panic.

“An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024,” reads the RNC’s description of the ad.

Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas, noted that the RNC disclosed its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not. Stoyanov predicted that groups looking to meddle with U.S. democracy will employ AI and synthetic media as a way to erode trust.

“What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

AI-generated political disinformation is already circulating online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show former President Donald Trump’s mug shot also fooled some social media users, even though no such photo was taken when he was booked in Manhattan Criminal Court on charges of falsifying business records. Other AI-generated images, showing Trump resisting arrest, were quickly acknowledged as fakes by their creator.

Rep. Yvette Clarke, D-N.Y., has introduced legislation in the House that would require candidates to label campaign ads created with AI. She has also sponsored a bill that would require anyone creating synthetic images to add a watermark indicating as much.

Some states have offered their own proposals for addressing concerns about deepfakes.

Clarke said her biggest fear is that generative AI could be used before the 2024 election to create videos or audio that incite violence and turn Americans against each other.

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI being weaponized, in a political season, could be extremely disruptive.”

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them “a deception” with “no place in legitimate, ethical campaigns.”

Other forms of artificial intelligence have long been a feature of political campaigning, from using data and algorithms to target voters on social media to automating tedious tasks such as tracking donors. Campaign strategists and tech entrepreneurs hope the most recent innovations will offer some positives in 2024, too.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his employees to use it as well, so long as any content drafted with the tool is later reviewed by human eyes.

Nellis’ newest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will write, send and evaluate the effectiveness of fundraising emails, all typically tedious tasks on a campaign.

“The idea is that every Democratic strategist, every Democratic candidate has a co-pilot in their pocket,” he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to enhance its explanatory coverage of elections and democracy. AP is solely responsible for all content.

___

See https://apnews.com/hub/misinformation for AP misinformation coverage and https://apnews.com/hub/artificial-intelligence for coverage on artificial intelligence.
