Georgia Rep. Mike Collins’ campaign uses AI-generated deepfake of Sen. Jon Ossoff in close Senate race

Applications of AI


Nasty slurs, disrespectful insults, and threatening mudslinging have been part of American politics for centuries. In this new era of relentless tactics, attacks are more brutal, and campaigns will do whatever it takes to stay ahead of the competition, even when the attacks themselves are artificial.

A new political ad in Georgia’s Senate race is raising concerns about the use of artificial intelligence in elections after U.S. Rep. Mike Collins’ campaign released a deepfake video showing Sen. Jon Ossoff mocking farmers and defending the government shutdown.

Ossoff never said that.

The video, posted on social media last week, was created using artificial intelligence and includes computer-generated audio of Ossoff supporting the shutdown and claiming he has “only seen farms on Instagram.” The ad includes a small on-screen disclaimer that says “Video is generated by AI” to avoid violating Georgia or federal law.

The ad sparked a broader discussion about the responsible use of emerging technology in political persuasion and what happens when the line between truth and fabrication blurs.

Campaigns collide over advertising

The Georgia Republican Party defended the video as legitimate political satire.

Josh McKoon, chairman of the Georgia Republican Party, said his party “supports the creative use of cutting-edge technology” and accused Democrats of being “afraid of losing power.”

“This ad was clearly labeled as AI-generated satire,” McKoon told CBS News Atlanta in a statement. “Claims that this is a deepfake intended to deceive are just the latest desperate narrative.”

Ossoff’s campaign disagreed, calling the ad a deliberate attempt to mislead voters. The incumbent senator has vowed not to use the technology during his campaign.

“The only reason a candidate would need to use deepfakes to fabricate their opponent’s words is if they don’t think they can win on their own. Georgians don’t like people who lie to them,” an Ossoff campaign spokesperson said in a statement.

Collins’ campaign said it plans to continue using AI tools.

“As technology advances and creates new opportunities to reach and communicate with voters, the Collins campaign will be at the forefront of adopting new tactics and strategies that go beyond traditional media coverage and deliver messages directly to voters,” the campaign said in a statement.

The local Democratic Party strongly criticized the move. “Instead of trying to fool voters with deepfake videos, the dishonest Rep. Mike Collins should explain why he supports doubling Georgians’ health insurance premiums,” said Devon Cruz, senior communications adviser for the Georgia Democratic Party.


Rep. Mike Collins is one of several Republicans challenging Sen. Jon Ossoff for the seat in next year’s midterm elections.

Experts warn the public may have trouble determining what is true

Dr. Patrick Dix, an AI and automation researcher, told CBS News Atlanta that deepfakes are no longer a threat of the future. They are here now, and campaigns are already deploying them.

“What we’re seeing now is what everyone back in 2017 said was coming,” Dix said. “You can create audio that sounds exactly like your opponent. Once people hear something, they assume it’s true.”

Dix said the danger is simple: deepfakes can trick voters into believing something a candidate has said or done that never happened.

“You can sway voters with things that aren’t true,” he said. “People could vote for or against someone based on a completely fabricated video.”

And since most AI-generated media leaves no traceable origin, anyone can create and share a deepfake, and campaigns may not even know where it originated, he said.

Deepfake regulations remain limited nationwide

Georgia lawmakers have moved to ban deceptive AI campaign materials. The bill, commonly referred to as SB9, was introduced in 2025 and would make it a crime to knowingly release certain AI-generated campaign materials within 90 days of an election without required disclosure. This measure carries penalties, which can be severe depending on intent and repeat offenses.

But Georgia is a relative latecomer among states trying to regulate this technology. Beginning in 2019, several states passed legislation to combat deceptive political deepfakes, particularly related to elections.

According to the National Conference of State Legislatures, as of 2025 at least half of U.S. states have enacted laws specifically addressing deepfakes in elections, nonconsensual sexual images, and identity theft. Last year, all 50 states, Washington, D.C., Puerto Rico, and the Virgin Islands considered AI-related legislation. Additionally, 38 states have enacted approximately 100 AI measures, with more expected by 2026.

Dix said the rapid rise of generative AI will require states to have stricter rules, such as permanent watermarks and clear disclosures, to prevent widespread confusion.

“If a campaign is going to use AI, they should tell voters up front,” he said. “Right now, people don’t know what the truth is, and when they’re confused, they might not vote at all.”

Signs of things to come

Collins’ campaign ad is one example. Without clear and enforceable rules, more campaigns and outside groups are likely to use AI-generated media to influence voters.

“Eventually, all candidates will be victims of this,” Dr. Dix said. “Without strong regulation, no one is protected from identity theft.”

A 2023 poll by the AP-NORC Center for Public Affairs Research and the University of Chicago Harris School of Public Policy found that nearly six in 10 adults (58%) believe that AI tools that can finely target voters, churn out persuasive content, and generate realistically fake images and videos in seconds will increase the spread of false and misleading information during presidential elections. In contrast, 6% said AI would reduce misinformation, and about a third said it would have little impact.

AI-generated political content was already widespread online ahead of the 2024 election. President Donald Trump shared an AI-manipulated video with his followers on his Truth Social platform, including a distorted clip of CNN host Anderson Cooper. The Republican National Committee released an AI-generated ad depicting former President Joe Biden alongside the words “What happens when the weakest president ever gets re-elected?” shortly after Biden announced he would seek re-election.

Other examples include a doctored video of Biden appearing to attack transgender people, an AI-generated image that falsely showed children learning about Satanism in a library, and a manipulated image of Trump’s mugshot before it was officially released. While some creators acknowledged that the content was AI-generated, its viral spread highlights the challenges voters face.

Georgia’s 2026 Senate race shows how quickly artificial intelligence can change the rules of campaigning. Voters may soon be forced to navigate a digital minefield of manipulated images and sounds, deciding not only who to support but also what to believe.


