TALLAHASSEE — The use of artificial intelligence to generate images, text and voice could “muddy” political campaigns and deepen voter mistrust, communications experts say.
Generative artificial intelligence (AI) allows users to enter prompts and receive content expressing almost anything they want. With the 2024 election looming, political communications experts are bracing for AI-generated images to appear more frequently in election ads.
“You’re talking about opposition messages that can be created with a single keystroke. These prompts return information so quickly that when the election cycle starts to kick in, we’ll be bombarded with it,” Janet Coates, managing director of the University of Florida’s Media and Technology Trust Consortium, told the News Service of Florida in a recent interview.
The ability to manipulate images and sounds is “moving to a whole different level of sophistication,” Coates said.
“We’ve been down this road for a long time,” Coates said. “One of the first visual advertisements now considered deceptive was the very famous 1964 attack ad — the girl with the daisies, with a mushroom cloud superimposed on screen — that the (Lyndon B.) Johnson campaign ran against Barry Goldwater.”
In the past, “when these manipulations were taking place, we knew there were people manipulating information,” she added.
The issue erupted in early June for the two major Florida politicians on a collision course in the 2024 Republican presidential primary. A Twitter account belonging to Governor Ron DeSantis’ presidential campaign tweeted a video containing multiple AI-generated photos of former President Donald Trump embracing Anthony Fauci, Trump’s former chief medical adviser, who spearheaded the administration’s pandemic response.
As Trump and DeSantis battle over their respective approaches to the COVID-19 pandemic, the DeSantis campaign sought to portray a comfortable relationship between Trump and Fauci.
The post containing the video was accompanied by a Twitter “community note,” a feature the social media platform says is intended to allow users to collaborate and “add context to potentially misleading tweets.”
“Three stills of Trump hugging Fauci are AI-generated images. The rest of the recordings and images in the ad are real,” the note said.
The New Zealand National Party’s use of AI-generated imagery in multiple political ads made international headlines in May. Also in May, the Republican National Committee ran an attack ad against President Joe Biden’s re-election campaign, using AI-generated imagery to depict a grim vision of the future under a second Biden term.
The video was posted to the Republican Party’s official YouTube channel with a description that explicitly informs viewers it incorporates AI imagery.
The description reads, “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.”
But not all AI images are so easily identified.
Coates pointed out that AI-generated images can be created by anyone with an internet connection, and that their sources can be difficult to trace.
“The barriers to entry are low. You don’t have to buy big, expensive tools. The tools are readily available, and you don’t need any particular expertise to use them. The more sophisticated you are, the higher the quality of the output, but it’s not rocket science,” Coates said.
Steve Vancore, a longtime political consultant and pollster, said generative AI could become commonplace at a time when the volume of political ads and other communications placed in front of voters is steadily increasing.
“From a larger perspective, what is alarming is that the public already has an inherent mistrust of political communication,” Vancore told the news service.
As the volume of political advertising increases, so too will the use of AI-generated images, audio, and text.
“Too much is at stake, and the people running these campaigns will take advantage of it just to raise more money and spend more money. It will be an ill-fated arms race that further fuels public mistrust,” Vancore said.
Vancore, who has worked on more than 250 campaigns in a career spanning decades, said whether he would advise candidates to use generative AI technology in advertising depends on how it is used.
“My criteria for political attack ads or negative ads are whether it’s true, whether it’s verifiable and whether it’s relevant,” Vancore said.
Vancore gave the example of a candidate using ChatGPT, a text-generating AI tool, to create emails for voters.
“Saying, ‘Hey, I want you to generate a series of emails talking about my program of after-school counseling for children’ — that’s fine,” he said. “An unacceptable use of artificial intelligence is, ‘I want you to generate an image of the other party playing with an underage girl.’”
Whether AI-generated ads can undermine a candidate’s trust also depends on how they’re used, Vancore said, adding that other uses of the technology could be more subtle.
“One of the things people say about Joe Biden is that he’s getting old. Maybe it’s not an unfair rap. The most powerful person on earth, or one of them, is aging. Isn’t that a legitimate concern? What if the Joe Biden campaign subtly rejuvenated him a little?” he said.
Vancore said Trump “has the same problem.”
“I know he (Trump) is aging, but that probably has a lot to do with what’s going on in his life. Would the media pick it up, and would it backfire?” he said.
Coates also noted that AI could be used to “clean up” images of candidates.
“Not only can they create offensive ads and disinformation about their adversaries, they can muddy the waters in an attempt to cleanse themselves. I don’t think we’ve even thought about the ways it could be deployed,” Coates said.
According to Jay Hmielowski, an associate professor of public relations at the University of Florida, the targets of ads featuring AI-generated images don’t necessarily need new methods to counter the attacks.
Candidates, for example, could use programs designed to detect AI-generated images, said Hmielowski, who specializes in political communication.
“With that, you can say, ‘Look, we ran it through this detector, and it clearly shows this is not what our candidate said. Additionally, here is the actual video of what happened at the event,’” he said. “So you do what you’ve always done: respond with what actually happened — here are the facts. I hope it gets through to those who want to listen.”
By Ryan Daley
