
The Republican National Committee's ad about President Joe Biden features images generated by artificial intelligence. The pervasiveness of AI-generated images, video, and audio is a challenge for policymakers.
Republican National Committee
This week, the Republican National Committee used artificial intelligence to create a 30-second ad that imagines what a second term for President Joe Biden might look like.
It depicts a string of fictional crises, from a Chinese invasion of Taiwan to the shutdown of the city of San Francisco, illustrated with fake images and news reports. A small disclaimer in the upper left notes the video was “built entirely with AI imagery.”
The ad is just the latest example of AI blurring the line between the real and the fictional. In the past few weeks, fake images of former President Donald Trump scuffling with police went viral. So did an AI-generated image of Pope Francis in a stylish puffer coat, and a fake song that used cloned voices of the pop stars Drake and The Weeknd.
Artificial intelligence is rapidly getting better at mimicking reality, raising big questions about how to regulate it. And as technology companies give anyone the ability to create fake images, synthetic audio and video, and convincingly human-sounding text, even experts admit they are stumped.
Irene Solaiman, a safety and policy expert at the AI company Hugging Face, focuses on making AI work better for everyone. That includes thinking hard about how these technologies can be misused to generate political propaganda, manipulate elections, and create fake histories or videos of things that never happened.
Some of those risks are already here. For several years, AI has been used to digitally insert the faces of unwitting women into porn videos. These deepfakes sometimes target celebrities and other times are used for revenge against private citizens.
That underscores that the risks posed by AI are not just about what the technology can do; they are also about how we, as a society, respond to these tools.
“One of my biggest frustrations that I’m shouting from the mountaintops in my field is that a lot of the problems we’re seeing with AI are not engineering problems,” Solaiman said.
Technological solutions struggling to keep up
There is no silver bullet to distinguish between AI-generated content and human-generated content.
Technical solutions do exist, such as software that can detect AI output, and AI tools that watermark the images or text they generate.
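To make the watermarking idea concrete, here is a minimal, illustrative Python sketch of one statistical approach: the generator is nudged toward a “green list” of words re-derived from the preceding word, and a detector flags text whose green-word rate is improbably high. The toy vocabulary and function names are assumptions for illustration, not any vendor's actual scheme; real systems operate on a language model's tokens and logits.

```python
import hashlib
import math
import random

# Toy vocabulary; a real scheme operates on a language model's token ids.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "tree"]

def green_list(prev_word: str) -> set:
    # Deterministically split the vocabulary in half, seeded by the previous
    # word, so the generator and the detector agree without sharing state.
    seed = int.from_bytes(hashlib.sha256(prev_word.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    words = sorted(VOCAB)
    rng.shuffle(words)
    return set(words[: len(words) // 2])

def detect(words: list) -> float:
    # z-score: in unwatermarked text, each word lands in its predecessor's
    # green list about half the time.
    hits = sum(1 for prev, w in zip(words, words[1:]) if w in green_list(prev))
    n = len(words) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)

# A watermarking generator biases its word choices toward the green list:
text = ["the"]
for _ in range(60):
    text.append(random.choice(sorted(green_list(text[-1]))))

print(detect(text))  # large z-score: likely watermarked
print(detect([random.choice(VOCAB) for _ in range(60)]))  # near zero
```

This also hints at the scheme's fragility: paraphrasing or swapping words lowers the green-word rate, which is one reason detectors must keep evolving.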
Another approach goes by the clunky name of content provenance. The goal is to make it clear where digital media, both real and synthetic, comes from.
Jeff McGregor, CEO of Truepic, a company that works on verifying digital content, said the goal is to make it easy for people to identify what kind of content they are looking at. “Was it created by a human? Was it created by a computer? When was it created? Where was it created?”
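As an illustration of the provenance idea, the sketch below hashes a file's bytes, records who made it, with what, and when, and signs that manifest so any later edit is detectable. It is a simplified, assumption-laden stand-in: real provenance standards such as C2PA embed signed manifests in the file itself and use certificate chains rather than the shared demo key used here, and all field names are hypothetical.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

SIGNING_KEY = b"demo-key"  # stands in for a device or publisher private key

def make_manifest(media: bytes, creator: str, tool: str) -> dict:
    # Record what the content is (its hash), who made it, with what, and when.
    manifest = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,
        "tool": tool,  # e.g. "camera" vs. "generative model"
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(media: bytes, manifest: dict) -> bool:
    # Check both that the manifest is untampered and that it matches the media.
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    )
    return ok_sig and claimed["sha256"] == hashlib.sha256(media).hexdigest()

photo = b"...raw image bytes..."
m = make_manifest(photo, creator="newsroom@example.org", tool="camera")
assert verify(photo, m)             # untouched media verifies
assert not verify(photo + b"x", m)  # any edit breaks the hash
```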
However, all of these technical responses have drawbacks. There is still no universal standard for identifying genuine or fake content. Detectors cannot catch everything and must be constantly updated as AI technology advances. Open source AI models may not contain watermarks.
That’s why people working on AI policy and safety say a mix of responses is needed.
Matthew Ferraro, an attorney at WilmerHale and an expert on the legal issues around AI, says laws and regulation will have to play a role, at least in the highest-risk areas.
“That would probably be non-consensual deepfake porn, or deepfakes of election candidates or state election officials in very specific circumstances,” he said.
Ten states already ban some kinds of deepfakes, mainly pornography. Texas and California have laws barring deepfakes that target candidates for office.
In some cases, copyright law is also an option. That’s what Universal Music Group, the label for Drake and The Weeknd, invoked to get the song impersonating their voices pulled from streaming platforms.
When it comes to regulation, the Biden administration and Congress have signaled their intent to act. But as with other matters of tech policy, the European Union is leading the way with its AI Act, a set of rules meant to put guardrails on how AI can be used.
Tech companies, however, are already making their AI tools available to billions of people and embedding them in the apps and software many of us use every day.
That means, for better or worse, sorting fact from AI fiction requires people to be savvier media consumers, though it doesn’t mean reinventing the wheel. Propaganda, medical misinformation, and false claims about elections are problems that predate AI.
Arvind Narayanan, a professor of computer science at Princeton University, says we should look at the ways we already mitigate those risks and think about how to adapt them to AI. That includes efforts like fact-checking and asking yourself whether what you’re seeing can be corroborated, which Solaiman calls “people literacy.”
“Fact-check anything that could have a large impact on your life or democratic processes,” she said.
Copyright 2023 NPR. For more information, please visit https://www.npr.org.
