When cameras caught former President Trump entering a Manhattan criminal courtroom earlier this week to face a 34-count indictment, fabricated images of him began circulating on social media.
Some of the fabricated images appeared to be mugshots of the former president, but his attorney told reporters that Trump did not have a booking photo taken during police processing on Tuesday.
Nonetheless, Trump’s 2024 presidential campaign has capitalized on the trend, sending out emails promoting t-shirts with a fake Trump mugshot that can be purchased online. It’s a tactic Trump’s team has used before, fundraising off both his impeachments and the FBI’s search of Mar-a-Lago for classified documents.
While that image appeared to be a more overt fabrication, others were noticeably more sophisticated and may have been generated by artificial intelligence, raising concerns about the sophistication and accessibility of AI-powered tools.
Here’s what you need to know about these AI-generated images:
What other images have gone viral?
A fabricated image of Pope Francis wearing a floor-length white puffer jacket received more than 30 million views in several posts last week. It’s the latest in a series of recent images that have flooded social platforms.
“These cases, which feel low-stakes, are worrisome because if our guard isn’t up, they leave the general public more susceptible to deception,” AI researcher Henry Ajder, who hosts a podcast about the technology on BBC Radio, told ABC News.
The fabricated image of Pope Francis was first posted by a user on a subreddit dedicated to showcasing work created with an image-generation program called Midjourney, and was likely created with the tool.
Last week, Midjourney announced it had suspended free trials due to “extraordinary demand and trial abuse.” The service is now available only through paid subscription plans.
The tool, reviewed by ABC News, is one of several artificial intelligence-powered text-to-image generators that let users enter natural-language descriptions, called prompts, and receive an image in return.
Some tools, such as OpenAI’s DALL-E 2, do not allow users to create images of public figures. OpenAI’s content policy also states that users should not upload images of people without their consent.
Why are experts alarmed?
It is the hyperrealism of the images that troubles synthetic media experts like Ajder.
“As people scroll through social media, these images fly by and register subconsciously,” he said. “You don’t have to look at an image critically for it to influence how you see people and the world.”
On the day of Trump’s court appearance, ABC News found that thousands of fake images of Trump had been generated on Midjourney’s platform. While only about a dozen made the jump to social media and went viral, it wasn’t the first time the general public had encountered AI fakery of the former president.
When asked about the viral images created with the tool, Midjourney founder David Holz told ABC News that the company is working on a “more nuanced moderation policy based on community feedback.”
“There are always risks that are difficult to predict, and the goal must be to find them, adapt and move forward,” Holz said in an email.
On March 20, when news of the possible indictment of former President Donald Trump made headlines, a series of fake photos circulating on Twitter purported to show his arrest.
The former president had not been arrested at that point, though he had (erroneously) predicted an imminent arrest days earlier.
The Trump photos, which falsely depicted an event that never happened, were created by Eliot Higgins, founder of the Netherlands-based investigative journalism outlet Bellingcat.
“Making pictures of Trump getting arrested while waiting for Trump’s arrest,” Higgins tweeted on March 20, alongside the images. He told ABC News that he created the series for fun.
Higgins said he was surprised the fake images of Trump got so much attention, but that it was nice to see they had encouraged discussion about AI image generation.
What is driving this wave of surreal fakery?
Experts like Sam Gregory, executive director of the global human rights network WITNESS, say it’s a combination of factors: the ease and accessibility of these tools, improved photorealism, and the capacity for mass production.
“This is really worrying,” said Gregory, who has spent the past five years leading an initiative to prepare journalists and educate the public about the potential harm of AI-generated media.
Gregory added that a commercial arms race among AI companies has contributed to the rapid development of these tools and the lack of safeguards.
“We’re also in the middle of a head-on commercial rush driven entirely by Silicon Valley’s needs, ignoring the needs of most people across the United States. You might say, wait a minute, where are the safeguards here?” he said.
What is the solution?
Ajder said companies developing AI technology must take responsibility for limiting access by “creating friction for bad actors.”
Steps like providing bank details or verifying a user’s identity through other accounts could make the tool more difficult to exploit, he told ABC News.
Rather than putting the burden of identifying AI-generated media on the public, Gregory emphasized the importance of making detection tools widely available.
“These tools are going to become more distributed, more available, and in many ways more fun, with tremendous creative power — but we really need to understand how to put those guardrails in place,” he said.
Hundreds of top AI researchers and tech industry luminaries signed an open letter this week urging labs to immediately pause training of powerful new AI systems for six months.