New York (CNN) — Pope Francis wearing a heavy white puffer coat. Elon Musk walking hand in hand with his rival, GM CEO Mary Barra. Former President Donald Trump being detained by police in dramatic fashion.
None of these things have ever happened, but AI-generated images depicting them have gone viral online over the past week.
The images ranged from the clearly fake to, in some cases, the convincingly real, fooling some social media users. Model and television personality Chrissy Teigen, for example, tweeted that she thought the image of the Pope's puffer coat was genuine, saying, "I didn't give it a second thought. I can't survive the technological future." And the Trump images prompted a number of headlines attempting to debunk the false impression that he had been arrested.
Such is the new online reality: the rise of a new crop of buzzy artificial intelligence tools has made it cheaper and easier than ever to create lifelike images, audio, and video.
While these AI tools may enable new forms of creative expression, the proliferation of computer-generated media also threatens to further pollute the information ecosystem. That risks deepening the challenge users, news outlets, and social media platforms face in discerning the truth, after years of grappling with online misinformation built on far less sophisticated visuals. There are also concerns that AI-generated images could be used to harass targets or further divide internet users.
"There is so much realistic-looking fake content online that I worry most people will come to rely on their tribal instincts, rather than informed opinion or verified evidence, as their guide to what is real," said Henry Ajder, a synthetic media expert who advises companies and government agencies, including serving on the European Advisory Council for Meta's Reality Labs.
Images can be particularly powerful at provoking emotion, compared with AI-generated text, which has recently surged in popularity thanks to tools like ChatGPT, says Claire Leibowicz, head of AI and media integrity at the Partnership on AI, a nonprofit industry group. That emotional pull can make it harder for people to slow down and assess whether what they are seeing is real or fake.
In addition, organized bad actors may ultimately attempt to confuse internet users and provoke certain behaviors, either by mass-producing fake content or by suggesting that genuine content is computer-generated.
Ben Decker, CEO of the threat intelligence group Memetica, warned of coordinated misuse. "Because if more people had that idea and put it together in a coordinated way, I think there is a universe where we start to see the impact move from online to offline," he said.
Rapidly Evolving Tools
Computer-generated imaging technology has progressed rapidly in recent years, from photoshopped images of sharks swimming down flooded highways during natural disasters to websites that, four years ago, began churning out fake, largely unconvincing photos of people who do not exist.
Many of the recent viral AI-generated images were created with a tool called Midjourney, a platform less than a year old that lets users create images from short text prompts. Midjourney's website describes the company as a "self-funded small team" with only 11 full-time staff.
A quick look at a Facebook page popular among Midjourney users turns up AI-generated images of a seemingly inebriated Pope Francis, elderly versions of Elvis and Kurt Cobain, Musk in a robotic Tesla bodysuit, and an array of spooky animal artwork. And that is just from the last few days.
The latest version of Midjourney is available only to some paying users, Midjourney CEO David Holz told CNN in an email Friday. According to a Discord post from Holz, Midjourney suspended free trial access to earlier versions this week. The creator of the Trump arrest images also said he had been banned from the platform.
A rules page on Midjourney's Discord server lays out what users are asked to avoid creating.
"It's hard to stay on top of moderation," Holz told CNN. "We're shipping an improved system soon. We incorporate a lot of feedback and ideas from experts and the community, and we try to be really thoughtful."
For the most part, the creators of these recent viral images do not appear to be acting maliciously. The Trump arrest images, for example, were created by the founder of Bellingcat, an online investigative journalism outlet, and were clearly labeled as fabrications, even if users elsewhere on social media lacked that context.
Development of safety measures
Platforms, AI technology companies, and industry associations are making efforts to improve transparency around computer-generated content.
Platforms such as Meta's Facebook and Instagram, Twitter, and YouTube have policies restricting or prohibiting the sharing of manipulated media that may mislead users. But as AI-generated content proliferates, even such policies risk creating misplaced confidence. If a fake image slips past a platform's detection system, it "may give people a false sense of trust," Ajder said. "They'll say, 'It must be real, because the detection system says it's real.'"
Work is also underway on technical solutions that would make it easier for anyone viewing an image online to know it was computer-generated, for example by watermarking AI-generated images or including a transparent label in an image's metadata. The Partnership on AI has developed a set of standards for responsible synthetic media practices with partners including ChatGPT creator OpenAI, TikTok, Adobe, Bumble, and the BBC, with recommendations covering how to disclose AI-generated images and how companies can share data about such images.
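To make the metadata-labeling idea concrete, here is a minimal, standard-library-only Python sketch that embeds a disclosure field in a PNG file as a `tEXt` chunk. This is purely an illustration of the general approach, not the Partnership on AI's actual specification or any platform's implementation; the `SyntheticMedia` keyword is invented for the example.

```python
# Illustrative sketch: embedding and reading a "this image is AI-generated"
# disclosure label in PNG metadata via a tEXt chunk. Hypothetical keyword;
# real provenance standards (e.g. industry metadata schemes) differ.
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def make_text_chunk(keyword: str, text: str) -> bytes:
    """Build a PNG tEXt chunk: length, type, keyword\\0text, CRC of type+data."""
    data = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    body = b"tEXt" + data
    return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))

def add_label(png_bytes: bytes, keyword: str, text: str) -> bytes:
    """Insert a disclosure tEXt chunk right after the IHDR chunk."""
    assert png_bytes.startswith(PNG_SIG), "not a PNG file"
    # IHDR is always first: 4 (length) + 4 (type) + 13 (data) + 4 (CRC) = 25 bytes
    ihdr_end = len(PNG_SIG) + 25
    return png_bytes[:ihdr_end] + make_text_chunk(keyword, text) + png_bytes[ihdr_end:]

def read_labels(png_bytes: bytes) -> dict:
    """Walk the chunk list and collect all tEXt key/value pairs."""
    labels, pos = {}, len(PNG_SIG)
    while pos < len(png_bytes):
        (length,) = struct.unpack(">I", png_bytes[pos:pos + 4])
        ctype = png_bytes[pos + 4:pos + 8]
        if ctype == b"tEXt":
            key, _, value = png_bytes[pos + 8:pos + 8 + length].partition(b"\x00")
            labels[key.decode("latin-1")] = value.decode("latin-1")
        pos += 12 + length  # advance past length, type, data, CRC
    return labels
```

A viewer or platform could then check `read_labels(...)` for the disclosure field before displaying the image, though of course such labels only help if generators add them and intermediaries preserve them.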
“The idea is that all these institutions are committed to disclosure, consent and transparency,” Leibowicz said.
A group of tech leaders, including Musk and Apple co-founder Steve Wozniak, published an open letter this week calling on artificial intelligence labs to pause training of the most powerful AI systems for at least six months, citing profound risks to society and humanity. Still, it is not clear whether any labs will take such a step. And as the technology rapidly improves and spreads beyond the relatively small group of companies committed to responsible practices, lawmakers may need to get involved, Ajder said.
"This new age of AI cannot be left in the hands of a few big companies making big money with these tools. We need to democratize this technology," he said. "At the same time, there are very real and justifiable concerns about radically open approaches, such as open-sourcing tools or minimizing restrictions on their use. I think legislation will probably play a role in governing some of those open models."