
A video that uses an artificial intelligence voice-cloning tool to impersonate Vice President Kamala Harris, putting words in her mouth that she never said, has raised concerns with Election Day three months away. The video gained widespread attention after tech billionaire Elon Musk shared it on his social media platform X; it was originally released as a parody, but Musk's post did not clearly flag it as one.
By late Sunday, Musk had clarified that the video was intended as satire, pinning the original creator's post to his profile and noting that puns and parodies are not crimes. The doctored video closely mimics a real campaign ad for Harris, a leading Democratic presidential candidate, but replaces her voice with a convincing AI-generated imitation.
“I, Kamala Harris, am the Democratic presidential candidate because Joe Biden finally showed his age in the debates,” the AI voice in the video declares, before going on to attack Harris over her gender and ethnicity, calling her a “diversity hire” and implying she is unfit to lead the country. To lend it credibility, the fake ad retains the “Harris for President” branding and mixes in authentic past footage of Harris.
“We believe what the American people want is the real freedom, opportunity and security that Vice President Harris is providing, not the false, manipulated lies of Elon Musk and Donald Trump,” Mia Ehrenberg, a spokeswoman for the Harris campaign, said in an email to The Associated Press.
The video is a prime example of how lifelike AI-generated content is blurring the line between humor and misinformation in politics as the U.S. presidential election approaches. It is also a stark reminder that while high-quality AI tools have become increasingly accessible, federal regulation of their use remains patchy, leaving most oversight to individual states and social media platforms.
The incident has also sparked debate about how AI-manipulated content should be handled, especially when it straddles the line between satire and misinformation. The video's creator, a YouTuber known as Mr. Reagan, labeled it a parody from the start on both YouTube and X. But Musk's initial post, which has drawn 130 million views according to the platform, was accompanied only by the comment “this is awesome” and a laughing emoji.
Throughout the weekend, users of X's “Community Notes” feature suggested that Musk's post be labeled as manipulated media, but no label was applied, even after Musk acknowledged the video's satirical intent.
It also raised questions about whether Musk's original share violated X's policy, which bans sharing synthetic, manipulated or out-of-context media that could mislead or harm people, with exceptions for memes and satire as long as they do not cause significant confusion about the media's veracity.
The man behind the Mr. Reagan persona, Chris Kohls, acknowledged in a YouTube video on Monday that he used AI to create the fake ad, arguing it was obviously a parody with or without a label. Musk, who endorsed former President Donald Trump earlier this month, did not respond to a request for comment.
Experts in AI-generated media reviewed the fake ad's audio and confirmed that much of it was generated using AI technology. Hany Farid, a digital forensics expert at the University of California, Berkeley, noted that while most viewers would not believe the voice is really Harris's, hearing the words in her voice makes the video that much more powerful, a demonstration of what generative AI can do. Farid said companies that offer AI voice-cloning tools should ensure their services are not used in ways that harm individuals or democracy.
Rob Weissman, co-president of the advocacy group Public Citizen, disagreed, arguing that the video's quality could mislead many viewers: because it plays into preexisting narratives about Harris, he said, many people are likely to believe it is authentic.
Weissman, whose group lobbies for stricter regulation of generative AI by Congress, federal agencies and state governments, described the video as a warning about the dangers that unregulated AI poses to politics.
Similar AI-generated deepfakes have appeared around the world, some meant to sway voters with misinformation and others to mock candidates. In Slovakia in 2023, for example, fake audio recordings impersonated a candidate and suggested a plot to manipulate the election. And in the United States, a Louisiana political action committee used AI to create satirical ads that superimposed a mayoral candidate's face onto that of an actor portraying him as an underachieving student.
Despite these examples, the U.S. Congress has not enacted legislation specifically about AI in politics, leaving regulatory efforts largely to individual states. More than a third of U.S. states have laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
Beyond X, other social media platforms have also introduced policies on synthetic and manipulated media. YouTube, for example, requires users to disclose when their videos contain realistic content made with generative AI, with violators risking account suspension.