The Trump administration’s use of AI-generated images, including cartoon-like visuals and memes, on official White House channels has sparked new alarm after the White House shared a realistically edited image of civil rights lawyer Nekima Levy Armstrong crying after her arrest.
Homeland Security Secretary Kristi Noem’s account first posted the original arrest image, and then the official White House account shared an altered version depicting her crying.
The doctored photo joins a flood of AI-edited images that have circulated in political posts since federal agents shot and killed Renee Good and Alex Pretti in Minneapolis.
Misinformation experts are troubled by the White House’s use of AI, worrying that such images erode public trust in official sources of information.
In response to criticism of the edited image of Levy Armstrong, White House officials doubled down on the post, with Deputy Communications Director Kaelan Dorr writing on X that “the memes will continue.” White House deputy press secretary Abigail Jackson also shared a post mocking the criticism.
David Rand, a professor of information science at Cornell University, said calling the altered image a meme “certainly appears to be an attempt to cast it as a joke or a humorous post, similar to their previous cartoons. This is likely intended to protect them from criticism for posting manipulated media.” He said the purpose of sharing the altered arrest images appears to be “much more vague” than that of the cartoon-like images the administration has shared in the past.
Memes have always conveyed multi-layered messages that are entertaining and informative to those who understand them, but indecipherable to outsiders. Zach Henry, a Republican communications consultant who founded the influencer marketing firm Total Virality, said AI-enhanced and edited images are just the latest tool the White House is using to engage Trump’s base of voters, who spend a lot of their time online.
“Ultimately, people going online will see it and immediately recognize it as a meme,” he said. “Your grandparents may not understand the meme when they see it, but they will ask their children and grandchildren about it because it looks real.”
Henry generally praised the work of the White House social media team, saying that if a post provokes a strong reaction, all the better, because outrage helps it go viral.
The creation and distribution of altered images, especially when shared by trusted sources, “reifies notions of what is happening rather than representing what is actually happening,” said Michael A. Spikes, a Northwestern University professor and news media literacy researcher.
“Government has to be a place where you can trust the information and say it’s accurate, because they have a responsibility to do that,” he said. “By sharing this kind of content, by creating this kind of content, trust is being eroded. I’m always skeptical of the word trust, but it’s eroding the trust that we should have in the federal government to provide us with accurate and verified information. This is a real loss and it’s really worrying.”
Spikes said he was already seeing a “systemic crisis” of distrust in the press and higher education, and that this behavior by official channels was fueling those problems.
Ramesh Srinivasan, a UCLA professor and host of the Utopias podcast, said many people are wondering where they can turn for “trustworthy information.” “AI systems will only exacerbate, amplify and accelerate the problem of lack of trust and inability to understand what is even considered reality, truth and evidence,” he said.
Srinivasan said that when the White House and other officials share AI-generated content, it not only encourages the public to keep posting similar content but also gives others in positions of credibility and power, such as policymakers, permission to share unlabeled synthetic content. He added that “we have an enormous challenge” given that social media platforms tend to “algorithmically privilege” extreme and conspiratorial content, which can easily be created with generative AI tools.
Already, AI-generated videos related to Immigration and Customs Enforcement actions, protests and interactions with the public are proliferating on social media. After Renee Good was shot by an ICE agent while she was in her car, several AI-generated videos began circulating that showed a woman telling ICE agents to stop and then driving away. Numerous fake videos are also circulating of people confronting immigration agents during raids, often yelling at them or throwing food in their faces.
Jeremy Carrasco, a content creator who specializes in media literacy and debunking viral AI videos, said the majority of these videos likely come from accounts that are “engagement farming,” or trying to capitalize on clicks by generating content around popular keywords and search terms like ICE. But he also said the videos are being viewed by people opposed to ICE and DHS, who may treat them as “fan fiction” or engage in “wishful thinking” in hopes of an actual backlash against the agencies and their employees.
Still, Carrasco believes that most viewers can’t tell whether what they’re watching is fake, and he questions whether they’ll be able to tell “what’s real and what’s not when it really matters, like when the stakes are much higher.”
Even when there are obvious signs of AI generation, such as road signs with gibberish text or other telltale mistakes, it is only in a “best case scenario” that a viewer is knowledgeable or observant enough to notice the use of AI.
Of course, this issue is not limited to news surrounding immigration enforcement and protests. Fabricated images of deposed Venezuelan leader Nicolas Maduro following his arrest exploded across the internet earlier this month. Carrasco and other experts believe the spread of AI-generated political content will only become more common.
Carrasco believes a potential solution could be the widespread use of digital watermarking systems that embed information about the media’s provenance into the metadata layer. The Coalition for Content Provenance and Authenticity has developed such a system, but Carrasco doesn’t think it will be widely adopted for at least another year.
“This problem is going to be here forever,” he said. “I don’t think people realize how bad this is.”
