Fake videos surge in the midterm elections as Republicans release an AI deepfake of James Talarico

AI Video & Visuals


Senate Republicans this week released an online ad created using artificial intelligence that appears to show a real-looking but fake Democratic candidate speaking directly to camera for more than a minute.

The National Republican Senatorial Committee’s deepfake of Texas Senate Democratic candidate James Talarico is just the latest in a series of AI-generated creations by national Republican campaign organizations over the past year. But it is the first to feature a fake version of a candidate speaking believably for this long — an example of how far AI technology has come in a short period, and an indication of where attack ads may be headed.

“The faces and voices are very good. There is a slight shift between the audio and the video, but other than that it looks very realistic and I don’t think most people would immediately notice that it’s fake,” Hany Farid, a professor at the University of California, Berkeley who specializes in digital forensics, said in an email.

The use of AI deepfakes in campaign advertising raises many ethical questions. There have also been bipartisan calls for federal legislation and regulation of the practice, although these ideas have also faced pushback on First Amendment grounds.

The 85-second ad depicts the AI-generated Talarico proudly reading excerpts of a 2021 tweet in which the real Talarico spoke about transgender issues, race, and religion, as well as a 2013 tweet in which he recalled attending a family planning event as a teenager. The ad also depicts the fake Talarico making new self-congratulatory comments about the tweets, “saying” things like “Oh, this is so moving” and “Oh, I love this too,” despite there being no evidence that the real Talarico ever said those words.

The ad begins and ends with what the narrator describes as a “dramatic reading,” and an “AI GENERATED” disclosure appears on screen for almost the entire ad. However, the disclosure text is small, mostly faint, and confined to the bottom corner of the screen. The fake “Talarico,” wearing a blazer and open-collared shirt, looks eerily similar to the real candidate.

A person familiar with the NRSC’s thinking said AI is a “consistently effective” way to highlight what opposing candidates are saying, adding: “These are Talarico’s real words…All we’ve done is visualize it for voters using modern tools, within all legal and ethical bounds.” The source declined, however, to comment on the additional “Talarico” commentary that the ad appears to have fabricated.

NRSC communications director Joanna Rodriguez claimed in an email that Democrats are “panicking after hearing James Talarico’s own words.” Talarico campaign spokesperson JT Ennis asserted in a text message that the candidates in the ongoing Republican primary are “afraid of James Talarico,” adding, “While they spend their time creating deepfake AI videos that mislead Texans, we are uniting Texans to win in November.”

Texas has one of the strictest state laws in the country regarding political deepfakes, but it applies only in the month before an election. The law, passed in 2019, makes it a criminal misdemeanor to create a deepfake video and publish or distribute it within 30 days of an election if it is “created with the intent to deceive” and intended to harm a candidate or influence the outcome, punishable by up to one year in jail.

Voting in the 2026 midterm elections takes place in early November, but the Republican primary runoff will be held in late May. And while about half the states have passed laws addressing campaign deepfakes, many of them require only disclosure when ads are created with AI. Democratic Sen. Andy Kim of New Jersey called for national action in response to the anti-Talarico ad, writing on X: “These deepfakes are dangerous and wrong. We need protections for all Americans who may be targeted, not just politicians.”

The words “AI GENERATED” appear in small letters in the lower right corner of the anti-Talarico ad, above the NRSC logo, for about three seconds after it starts. Then, as the fake “Talarico” speaks, the words “AI GENERATED” appear in even fainter and smaller letters in the same corner and remain on the screen for over a minute. A slightly larger and darker disclosure text reappears for the last 5 seconds of the ad.

In some cases, campaigns’ use of AI has included no disclosure at all. Examples include the 2023 Republican presidential campaign of Florida Governor Ron DeSantis, which posted a fake image of President Donald Trump hugging Dr. Anthony Fauci mixed in with real images, and the 2024 robocall scandal, in which a consultant to Representative Dean Phillips’ Democratic presidential campaign hired someone to create an AI version of President Joe Biden’s voice urging New Hampshire voters not to vote in the primary election.

Sara Krebs, a professor at Cornell University and director of its Technology Policy Institute, said the disclosures in the anti-Talarico ad “reflect the direction in which the technology, and the norms surrounding it, seem to be moving.”

“Campaigns seem to be starting to treat synthetic media not as something to hide — perhaps because getting caught reads as dishonest and deceptive, qualities not desired in an elected official — but as something that can be used openly, so long as viewers are told what they are seeing,” Krebs said in an email.

However, it is debatable whether such small-print disclosure is truly transparent.

“I think the faint little font in the bottom right corner falls far short of proper disclosure, because the average person scrolling through X or YouTube simply doesn’t notice it. In fact, I didn’t notice it when I first watched the video,” Farid said. “I also think that even if (Talarico’s) tweets are genuine, an ad could reasonably be classified as deceptive when it shows a fake version of the candidate reading them in a way he never did. And I don’t think campaigns or candidates should open this Pandora’s box.”

Rapid advances in AI technology have made fake videos more convincing and easier than ever to create, fueling a surge in AI hoaxes during the 2026 midterm cycle.

Texas is a prime example. As The Texas Tribune reported last month, AI videos and images were used in multiple ads and social media posts during the controversial Republican Senate primary race involving Sen. John Cornyn and Texas Attorney General Ken Paxton.

The Paxton campaign’s attack ads featured a fake “Cornyn” dancing happily with Democratic Rep. Jasmine Crockett. A small piece of text at the end of the ad reveals that “certain videos” are AI-generated “satire that does not represent real events.” Meanwhile, an ad from the Cornyn campaign showed a fabricated clip of U.S. Rep. Wesley Hunt, a failed Republican primary rival, holding a Pomeranian, depicting Hunt as merely a “show dog.” That ad included no AI disclosure at all.

Various Democratic politicians are also using AI. California Governor Gavin Newsom has posted a fictitious video of President Trump and senior administration officials crying in handcuffs, along with other AI content, much of it clearly fabricated and satirical. Crockett’s unsuccessful Senate primary campaign did not directly respond to Texas media’s questions about whether a striking ad image of her standing before a large crowd was generated by AI.

The NRSC, like other campaign organizations, sees little downside to deepfakes in advertising. Even if some viewers are angered or news outlets report that an ad is fake, the ad — and the message it is trying to drive home — gets more attention. Krebs said synthetic media is “likely to become a routine election tool” for both parties.

“What we’re probably seeing is a kind of competitive boundary-pushing, where one campaign proves a tactic and other campaigns adopt it at little apparent cost,” she said.




