NEW YORK –
The release of a doctored video that makes US Vice President Kamala Harris appear to say things she did not say is raising concerns about the power of artificial intelligence to mislead the public with just three months to go until Election Day.
The video gained attention after tech billionaire Elon Musk shared it on his social media platform “X” on Friday night, without specifying that it was originally published as a parody.
The video uses much of the same footage as the actual ad that Harris, a leading Democratic presidential candidate, released last week as she launched her campaign, but the narration has been replaced with a voice that mimics Harris's.
“I, Kamala Harris, am the Democratic candidate for president because Joe Biden finally showed his age in the debate,” a voice says in the video. The voice claims Harris is a “diversity hire” because she is a woman and a person of color, and says she “knows nothing about running a country.” The video keeps the “Harris for President” branding intact, and adds some authentic historical footage of Harris.
“We believe what the American people want is the real freedom, opportunity and security that Vice President Harris is providing, not the false, manipulated lies of Elon Musk and Donald Trump,” Mia Ellenberg, a spokeswoman for the Harris campaign, said in an email to The Associated Press.
The widely shared video is one example of how lifelike AI-generated images, videos, and audio clips have been used to ridicule and mislead politicians as the US presidential election approaches. The video highlights how, even as high-quality AI tools become much more accessible, there has so far been a lack of notable federal action to regulate their use, leaving the rules guiding AI in politics largely to states and social media platforms.
The video also raises questions about how to best handle content where the lines of appropriate use of AI are blurred, particularly content that falls into the category of satire.
The original user who posted the video, a YouTuber known as Mr. Reagan, clarified on both YouTube and X that the doctored video was a parody. But Musk's post, which has been viewed more than 123 million times according to the platform, simply includes the caption “This is awesome” and a laughing emoji.
X users familiar with the platform may know that they can click on Musk's post to be taken to the original user's post and view the disclosure, though there is no instruction to do so in Musk's caption.
Some participants in X's “Community Notes” feature, which adds context to posts, suggested Musk's post be labeled, but no such label had been added as of Sunday afternoon. Some online users questioned whether Musk's post violated X's policies, which state that users “may not share composite, manipulated, or out-of-context media that may deceive, confuse, or harm people.”
The policy makes an exception for memes and satire, as long as they don't cause “significant confusion about the veracity of the media.”
Musk endorsed Republican candidate and former President Donald Trump earlier this month. Neither Reagan nor Musk immediately responded to emailed requests for comment Sunday.
Two experts specializing in AI-generated media examined the audio in the fake ad and confirmed that much of it was generated using AI technology.
One of them, Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video shows the power of generative AI and deepfakes.
“The AI-generated audio is very good,” he said in an email. “Most people would not believe it is VP Harris' voice, but having the words in her voice makes the video that much more powerful.”
He said generative AI companies that offer voice cloning tools and other AI tools to the public should do more to ensure their services are not used in ways that harm people and democracy.
Rob Weissman, co-executive director of the advocacy group Public Citizen, disagreed with Farid and said many people would be fooled by the video.
“I think this is obviously not a joke,” Weissman said in an interview. “I think most people who see this wouldn't think it was a joke. It's not good, but it's good enough. And because this plays into the existing themes surrounding her, most people are going to believe this is real.”
Weissman, whose group has advocated for Congress, federal agencies and state governments to regulate generative AI, said the video was “the kind of thing we've been warning about.”
AI deepfakes generated in the United States and elsewhere have sought to influence voters with misinformation, humor, or both: in Slovakia in 2023, fake audio clips impersonated candidates discussing plans to rig the election and raise beer prices days before the vote; in Louisiana in 2022, a satirical political action committee ad superimposed the face of a mayoral candidate onto an actor portraying a struggling high school student.
Congress has yet to pass any laws regarding AI in politics, federal agencies have taken only limited action, and most existing regulation in the US is left to the states. More than a third of states have enacted their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
Besides X, other social media companies are also creating policies around synthetic or manipulated media shared on their platforms. For example, users of video platform YouTube must disclose whether they have used generative artificial intelligence to create their videos or risk having their accounts suspended.