With fewer than 200 days until the November election, the social media giants have outlined plans to ensure users can distinguish between machine-generated and human-generated content.
Editor's note: A version of this article first appeared in our newsletter “Trusted Sources.” Subscribe here for a daily digest documenting the evolving media landscape.
New York
CNN
—
Big Tech is rushing to address the flow of AI-generated images flooding social media platforms before machine-generated renderings further pollute the information space.
TikTok announced Thursday that it will start labeling AI-generated content. Meta (parent company of Instagram, Threads, and Facebook) announced last month that it would start labeling such content. And YouTube introduced rules requiring creators to disclose if their videos were created by AI so the label could be applied. (Notably, Elon Musk's X has not announced plans to label AI-generated content.)
With fewer than 200 days until the November election and the technology advancing at breakneck speed, each of the three social media giants has outlined plans to help its billions of users reliably distinguish between machine-generated and human-generated content.
Meanwhile, OpenAI, the creator of ChatGPT, which allows users to create AI-generated images through the DALL-E model, announced this week that it will release a tool that allows users to detect when an image has been constructed by a bot. The company also announced it was launching a $2 million election fund with Microsoft to combat deepfakes that can “deceive voters and undermine democracy.”
The initiatives from Silicon Valley amount to an acknowledgment that the tools the tech giants are building carry significant potential to wreak havoc in the information space and seriously damage the democratic process.
AI-generated images have already proven to be particularly deceptive. Just this week, a purported AI-generated image of pop star Katy Perry posing on the Met Gala red carpet in a metallic floral dress fooled people into believing she attended the annual event, even though she was not there. The image was so realistic that Perry's own mother believed it to be real.
“I didn't know you went to the Met,” Perry's mother texted her, according to a screenshot Perry posted.
“LOL, Mom, the AI got you too, be careful!” Perry replied.
While this viral image caused no serious damage, it is not difficult to imagine a scenario, particularly ahead of a major election, in which a fake photo misleads voters and sows confusion, possibly tilting the race toward one candidate or another.
However, despite repeated and alarming warnings from industry experts and stakeholders, the federal government has so far taken no steps to establish safeguards around the industry. That has left the big tech companies to take their own steps to rein in the technology before bad actors exploit it for their own benefit. (What could go wrong?)
It remains to be seen whether these industry-led efforts will succeed in curbing the spread of harmful deepfakes. The social media giants all have rules banning certain content on their platforms, but history has repeatedly shown that they often fail to properly enforce those rules, allowing malicious content to spread to the masses before they take action.
As AI-generated images increasingly bombard the information environment, that poor track record does not inspire much confidence, especially as the United States heads toward an unprecedented election that puts its very democracy at risk.