NEW YORK (AP) — Artificial intelligence imaging can be used to create art, try on clothes in virtual fitting rooms or help design advertising campaigns.
But experts fear the darker side of the easily accessible tools could worsen something that primarily harms women: nonconsensual deepfake pornography.
A deepfake is a video or image that has been digitally created or altered with artificial intelligence or machine learning. Porn created using the technology first began spreading across the internet several years ago when a Reddit user shared clips that placed the faces of female celebrities on the shoulders of porn actors.
Since then, deepfake creators have disseminated similar videos and images targeting online influencers, journalists and others with a public profile. Thousands of videos exist across a plethora of websites, and some sites offer users the opportunity to create their own images, essentially allowing anyone to turn whomever they wish into a sexual fantasy without their consent, or use the technology to harm a former partner.
Experts say the problem has grown as it has become easier to create sophisticated and visually compelling deepfakes. And they say things could get worse with the development of generative AI tools that are trained on billions of images from the internet and use existing data to spit out new content.
“The reality is that the technology will continue to proliferate, will continue to develop and will continue to become sort of as easy as pushing a button,” said Adam Dodge, the founder of EndTAB, a group that provides trainings on technology-enabled abuse. “And as long as that happens, people will undoubtedly continue to misuse that technology to harm others, primarily through online sexual violence, deepfake pornography and fake nude images.”
Noelle Martin, of Perth, Australia, has experienced that reality. The 28-year-old found deepfake porn of herself 10 years ago when, out of curiosity, she used Google to search for images of herself. To this day, Martin says she doesn’t know who created the fake images, or the videos of her engaging in sexual intercourse that she would later find. She suspects someone took a picture posted on her social media page or elsewhere and doctored it into porn.
Horrified, Martin contacted different websites over a number of years in an effort to get the images taken down. Some didn’t respond. Others took the images down, but she soon found them again.
“You can’t win,” said Martin. “This is something that will always be there. It’s like it ruined you forever.”
She said the more she spoke out, the more the problem escalated. Some people even told her that the way she dressed and posted images on her social media accounts contributed to the harassment, essentially blaming her for the images.
Eventually, Martin turned her attention to legislation, advocating for a national law in Australia that would fine companies 555,000 Australian dollars ($370,706) if they don’t comply with removal notices for such content from online safety regulators.
But governing the internet is next to impossible when countries have their own laws for content that is sometimes made halfway around the world. Martin, currently a lawyer and legal researcher at the University of Western Australia, says she believes the problem has to be controlled through some sort of global solution.
In the meantime, some AI companies say they are already curbing access to explicit images.
OpenAI says it removed explicit content from the data used to train its image generating tool DALL-E, which limits users’ ability to create those types of images. The company also says it filters requests and blocks users from creating AI images of celebrities and prominent politicians. Midjourney, another tool, blocks the use of certain keywords and encourages users to flag problematic images to moderators.
Meanwhile, the startup Stability AI rolled out an update in November that removes the ability to create explicit images using its image generator Stable Diffusion. The changes came following reports that some users were creating celebrity-inspired nude pictures using the technology.
Stability AI spokesperson Motez Bishara said the filter uses a combination of keywords and other techniques, like image recognition, to detect nudity and returns a blurred image. But because the company releases its code to the public, it is possible for users to manipulate the software and generate whatever they want. Bishara said Stability AI’s license “extends to third-party applications built on Stable Diffusion” and strictly prohibits “any misuse for illegal or immoral purposes.”
Some social media companies are tightening their rules to better protect their platforms from harmful material.
TikTok said last month that all deepfakes or manipulated content showing realistic scenes must be labeled to indicate they’re fake or altered in some way, and that deepfakes of private figures and young people are no longer allowed. Previously, the company had barred sexually explicit content and deepfakes that mislead viewers about real-world events and cause harm.
The gaming platform Twitch also recently updated its policies around explicit deepfake images after a popular streamer named Atrioc was discovered to have a deepfake porn website open in his browser during a livestream in late January. The site featured phony images of fellow Twitch streamers.
Twitch already prohibited explicit deepfakes, but now showing even a glimpse of such content, even if it is intended to express outrage, “will be removed and will result in an enforcement,” the company wrote in a blog post. And intentionally promoting, creating or sharing the material is grounds for an instant ban.
Other companies are also trying to ban deepfakes from their platforms, but it takes diligence to prevent them.
Apple and Google said recently they removed an app from their app stores that was running sexually suggestive deepfake videos of actresses to market the product. Research into deepfake porn is not prevalent, but one report released in 2019 by the AI firm DeepTrace Labs found it was almost entirely weaponized against women, with Western actresses the most targeted, followed by South Korean K-pop singers.
The same app removed by Google and Apple had run ads on Meta’s platforms, which include Facebook, Instagram and Messenger. Meta spokesperson Dani Lever said in a statement that the company’s policy restricts both AI-generated and non-AI adult content, and that it has restricted the app’s page from advertising on its platforms.
In February, Meta, as well as adult sites like OnlyFans and Pornhub, began participating in an online tool called Take It Down that allows teens to report explicit images and videos of themselves from the internet. The reporting site works for regular images and AI-generated content, which has become a growing concern for child safety groups.
“When people ask our senior leadership, what are the rocks coming down the hill that we’re worried about? There’s AI, and specifically deepfakes,” said Gavin Portnoy, a spokesperson for the National Center for Missing & Exploited Children, which operates the Take It Down tool.
“We haven’t yet developed a direct response to that …” Portnoy said.
