Quick Read
- Doctored video shared by Musk mimics Harris' voice, raising concerns about AI in politics
- From the Associated Press
- Date: July 28, 2024
- A doctored video shared by Elon Musk impersonating Vice President Kamala Harris' voice has raised concerns that artificial intelligence could mislead voters with Election Day just three months away.
- The video, which closely resembles an actual Harris campaign ad, replaces the original narration with an AI-generated voice that sounds very much like Harris' and makes false statements.
- Mia Ellenberg, a spokeswoman for the Harris campaign, denounced the video as a “false, manipulated lie” and accused Musk and Trump of spreading misinformation.
- The video, originally published as a parody by YouTuber Mr. Reagan, highlights the challenges posed by AI-generated content in the political sphere.
- Experts confirmed that AI technology was used to generate the audio in the video, demonstrating the power and potential harm of generative AI and deepfakes.
- The case highlights the lack of federal regulation of AI in politics, forcing states and social media platforms to make their own rules.
- The video also raises questions about the ethical use of AI, especially as its content blurs the line between satire and deception.
- Social media platforms, including X (formerly Twitter), have policies regarding manipulated media, but enforcement and transparency remain issues.
- The controversy illustrates broader concerns about the role AI plays in shaping public opinion and the urgent need for regulatory action.
The Associated Press reports:
Doctored video shared by Musk mimics Harris' voice, raising concerns about AI in politics
NEWSLOOKS – NEW YORK (AP) —
A doctored video that makes Vice President Kamala Harris sound like she is saying things she is not is raising concerns about the power of artificial intelligence to mislead people with Election Day just three months away. The video gained attention Friday evening when tech billionaire Elon Musk shared it on his social media platform “X” without disclosing that it was originally published as a parody.
The video uses much of the same footage as the actual ad that Harris, a leading Democratic presidential candidate, released last week as she launched her campaign. But the video replaces the narration with a different voice that sounds a lot like Harris. “I, Kamala Harris, am your Democratic candidate for president because Joe Biden finally showed his dementia in the debate,” the voice says in the video. It also claims that Harris is a “diversity hire” because she is a woman and a person of color, and that she “doesn't even know the basics about running a country.” The video keeps the “Harris for President” branding intact. It also adds some authentic historical footage of Harris.
“We believe that what the American people want is real freedom, opportunity and security that VP Harris provides, not the false, manipulated lies of Elon Musk and Donald Trump,” Harris campaign spokeswoman Mia Ellenberg said in an email to The Associated Press. The widely shared video is just one example of how lifelike AI-generated images, videos and audio clips have been used to mock and mislead people about politics as the presidential election approaches. While high-quality AI tools have become much more accessible, the lack of notable federal action to regulate their use highlights the fact that rules guiding AI in politics are largely left to state governments and social media platforms.
The video also raises questions about how best to handle content that blurs the lines of what counts as appropriate use of AI, especially when it falls into the category of satire. The original user who posted the video, a YouTuber known as Mr. Reagan, made it clear on both YouTube and X that the doctored video was a parody. But Musk's post, which has been viewed more than 123 million times according to the platform, only has the caption “this is awesome” and a laughing emoji. X users familiar with the platform may know that they can click from Musk's post to the original user's post, where the disclosure is written; Musk's caption does not instruct them to do so.
Some participants in X's “Community Notes” feature, which adds context to posts, have suggested Musk's post be labeled, but as of Sunday afternoon no such label had been added. Some users online have questioned whether Musk's post violates X's policies, which state that users “may not share synthetic, manipulated, or out-of-context media that may deceive, confuse, or harm anyone.” The policy makes exceptions for memes and satire, as long as they don't cause “significant confusion about the veracity of the media.”
Earlier this month, Musk endorsed Republican candidate and former President Donald Trump. Neither Mr. Reagan nor Musk immediately responded to an emailed request for comment Sunday. Two experts who specialize in AI-generated media reviewed the audio in the fake ad and found that much of it was generated using AI technology. One of them, Hany Farid, a digital forensics expert at the University of California, Berkeley, said the video shows the power of generative AI and deepfakes. “The AI-generated audio is very good,” he said in an email. “Most people would not believe it's VP Harris' voice, but having the words in her voice makes this video that much more powerful.” He said generative AI companies that offer voice cloning tools and other AI tools to the public should do more to ensure their services aren't used in ways that could harm people or democracy.
Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid and said many people would be fooled by the video. “I don't think it's a joke, obviously,” Weissman said in an interview. “Most people who see this won't think it's a joke. It's not good, but it's good enough. And it relates to the existing themes surrounding her, so most people will believe it's real.” Weissman, whose group has urged Congress, federal agencies and state governments to regulate generative AI, said the video “is exactly the kind of thing we've been warning about.”
Other AI deepfakes generated in the United States and elsewhere have sought to influence voters with misinformation, humor or both. In Slovakia in 2023, a fake audio clip impersonated a candidate discussing plans to rig the election and raise beer prices days before the vote. In Louisiana in 2022, a satirical political action committee ad superimposed the face of a Louisiana mayoral candidate onto that of an actor playing a struggling high school student.
Congress has yet to pass any laws regarding AI in politics, federal agencies have taken only limited steps, and most of the existing regulation in the United States is left to the states. According to the National Conference of State Legislatures, more than one-third of states have enacted their own laws regulating the use of AI in campaigns and elections. Besides X, other social media companies have also enacted policies regarding synthetic and manipulated media shared on their platforms. YouTube, for example, requires users to disclose whether they used generative artificial intelligence to create their videos or face suspension.