Earlier this month, hundreds of celebrities descended on the steps of New York City's Metropolitan Museum of Art for a chance to be photographed at the 2024 Met Gala, one of the year's biggest fashion events. But some of the more widely circulated images from the night included red carpet photos of two A-listers who weren't even in attendance.
The images depicted Katy Perry and Rihanna. One showed Perry standing in front of a crowd of photographers outside the Met Gala in a themed floral dress; it was generated by artificial intelligence (AI). The images are of questionable quality, and upon closer inspection it quickly becomes apparent that the carpet is the wrong color and the photographers in the background appear slightly distorted. Even so, they seemed legitimate enough to confuse Perry's own mother, as a text exchange Perry shared on her Instagram account attests.
Instagram's official account acknowledged the image, commenting “True Goddess,” nearly 24 hours before Instagram's third-party fact-checkers flagged it as altered.
Fake photos of pop stars at the Met Gala may seem harmless, but these images are just the latest high-profile example of how AI can be used to mislead people.
The widely shared images of Perry and Rihanna circulated on Instagram as lawmakers grapple with how to implement safeguards around AI, raising the question of what platforms themselves are doing.
Responding to questions about the Perry and Rihanna images, Meta spokesperson Ryan Daniels told Yahoo News, “Deceptive efforts rarely target just one platform, so we welcome further study of AI and cross-platform fraud.” Meta treats AI images like these as altered photos.
Bernhard Gademann is CEO of technology investment firm Pioneer Ventures and co-founder of Edu Smart Technologies, which aims to help teachers integrate technology into the classroom. “The development of AI is really rapid,” he said.
Gademann predicts that AI will soon become so advanced that even the AI-powered detection tools platforms use to help identify and label synthetic content will struggle to keep up.
Earlier this month, TikTok announced it would begin automatically labeling videos and images created with AI using watermarking technology from the Coalition for Content Provenance and Authenticity. The labeling does not yet apply to audio-only content.
Meta announced in April that it would start labeling AI-generated content on Facebook and Instagram beginning in May. The announcement came in response to criticism from the Oversight Board, an external group funded by Meta, which recently called the company's AI policies “disjointed” and “confused.” The board was convened after Meta refused to remove a seven-second AI-altered video that appeared to show President Biden inappropriately touching a young woman.
In March, Google introduced a new tool in YouTube's Creator Studio that requires creators to disclose when content has been altered or created using generative AI. However, Google clarified that it will not require creators to report “obviously unrealistic content” or “beautifying filters or other visual enhancements.”
X took 17 hours to remove explicit AI images of Taylor Swift
In January, explicit AI-generated images of Taylor Swift circulated on X, causing the search phrase “Taylor Swift AI” to trend. In this case, the concern was not whether the photos were real, but the lack of laws protecting victims of AI and the need for social platforms to implement their own standards for flagging and removing AI content.
One post sharing the Swift images, from a verified X user, remained accessible for 17 hours and was viewed more than 45 million times before X removed it. X owner Elon Musk has cut back the platform's moderation team since taking over the company in 2022, but X still prohibits synthetic media and non-consensual nudity.
“It's probably no coincidence that certain AI-generated content is particularly popular,” Gademann said, referring to non-consensual AI-generated explicit images like those of Swift. “Before the age of AI and deepfakes, there was a saying that ‘seeing is believing.’ But in the age of AI, is seeing believing? That’s the question.”
Swift is not the only celebrity targeted by AI-generated pornography. Legal experts and victims, many of them teenage girls, are demanding that technology companies and government officials take action to address the growing crisis.
Although several bills have been introduced in Congress, including the AI Labeling Act and the DEFIANCE Act, there is currently no federal law banning deepfake pornography in the United States.
Anxiety about the future
With AI so readily available and technology advancing so rapidly, are labels enough?
“There are some technical solutions, but [for AI] there are completely different dynamics in areas where the content is very harmful to people, in both its creation and its distribution,” Stanford University researcher Renee DiResta told Yahoo News. “Nobody knows yet what they're going to do about it.”
One of Gademann's concerns is that lawmakers will overreact and overcorrect the current lack of legal guidelines on AI. For Gademann, the benefits of AI technology, which he cites as widespread access to knowledge, the ability to create new content easily and cheaply, and its potential in the medical field, far outweigh the drawbacks, at least for now.
“This technology is generated by humans, maintained and managed by humans, and its underlying source is data created by humans,” Gademann said. “I think it’s really important to understand this distinction: [AI] is not a force acting on its own. Humans are in charge, which is why demanding accountability is so important.”