A person familiar with the exchange, who shared details on the condition of anonymity, said Meta employees told the operatives that such images could be treated as manipulated media and, under certain conditions, reviewed rather than deleted by the independent fact-checkers who work with the company to investigate misinformation and apply warning labels to questionable content. The approach upset campaign officials, who said fact-checkers were slow to react to viral falsehoods and often missed content that was rapidly replicated across platforms.
The approach may also involve significant carve-outs for candidates, officials and political parties. Meta exempts politicians from fact-checking under a system that company executives have defended by arguing that political statements are sacrosanct.
A Meta representative didn’t respond to a question from The Washington Post this week about whether AI-generated images fall under the fact-checking exemption granted to politicians. Meta spokesperson Dani Lever simply pointed to the company’s fact-checking policy, which explains how the company handles content rated “false, falsified, or partially false” by a nonpartisan third-party fact-checking organization. The guidelines say nothing about AI-generated media or who can post it.
AI-generated images bring new dynamics to the heated debates over political speech that have vexed the tech giants in recent years. There are open questions about who created a given piece of content and who is posting it, and the major social networks have treated such material differently.
Twitter has a rule against “synthetic, manipulated, or out-of-context media,” though it is unevenly enforced. The Elon Musk-owned company, like Facebook, explicitly extends special consideration to “elected government officials” who may violate the platform’s policies.
TikTok, for its part, broadly bans images and videos that “mislead users by distorting the truth of events and cause significant harm,” but clips from the popular app quickly spread to other sites. Google, meanwhile, prohibits content that “misleads users through manipulated media related to politics, social issues, or matters of public interest.”
The problem of politicians posting AI-generated images is no longer entirely theoretical. Trump posted an AI-generated image of himself praying on his own social network, Truth Social, last month. His Facebook account was restored earlier this year following a two-year ban imposed, in Facebook’s description, for “his praise for people engaged in violence at the Capitol on January 6, 2021.” And a recent post promoting a campaign statement and a rally to his 34 million Facebook followers makes clear that he intends to use the platform as part of his White House bid, as he did in previous campaigns.
Rapidly advancing AI technology poses challenges for Meta and its peers, digital media strategists in both parties said. But they disagreed about whether the politician carve-out should be extended to synthetic media.
Julian Mulvey, who made ads for Sen. Bernie Sanders (I-Vt.) in 2016 and for the Biden campaign and the Democratic National Committee in 2020, said that given the technology’s power, AI-generated content would amount to a different kind of political speech, one worth protecting under the First Amendment but one that also demands safeguards for users and voters.
“A warning label would be appropriate as we venture into this new territory,” Mulvey said.
Eric Wilson, director of the Center for Campaign Innovation, a conservative nonprofit, said the onus is on voters and campaigns, not private companies, to decide what’s appropriate. “I think we have work to do to make sure voters are sufficiently sophisticated, and campaigns ultimately have a moral obligation to be honest and truthful with voters,” he said. “But it’s not the platforms that should force it.”
Trump’s move to share a deepfake of himself added to the flood of AI-generated content surrounding his indictment in New York, arguably the first major political news event to see sophisticated, if not entirely convincing, synthetic media flow freely across the internet.
A fake image of Trump being arrested was viewed millions of times on Twitter. Eric Trump, one of the former president’s adult sons, joined in, sharing a fake image of his father marching with supporters in the streets of New York. Further afield, a doctored video of Manhattan District Attorney Alvin Bragg announcing Trump’s indictment spread from TikTok to Twitter, garnering tens of thousands of views in the process.
The technology’s new pervasiveness has also started a debate about whether it could, or should, be used to power standard-issue campaign ads. Wilson cited two examples from overseas: a Swedish outreach strategy that included personalized video greetings, and an Indian campaign that made it appear as if a politician were speaking in different dialects.
In the US, ad makers are still getting used to the technology and weighing what is ethically and legally possible.
Neil Goodman, a Democratic digital strategist, used DALL-E, developed by OpenAI, the creator of the AI language model ChatGPT, to generate images of geese typing at computers for a candidate in San Mateo County, Calif. What to do about the geese overrunning Foster City, Calif., had been a point of contention in the campaign, Goodman said, noting that the text of the accompanying email was written by a human.
“But a goose writing an email isn’t something you can easily Photoshop,” Goodman said. “In this case, the email exceeded expectations.”
Goodman, whose firm also worked for the Democratic-backed candidate in the recent Wisconsin Supreme Court election, said such situations require “human oversight at every step of the process.”
Mulvey, the Democratic media consultant, said he has experimented with Midjourney, one of the leading AI image generators alongside DALL-E, and has typed prompts into ChatGPT.
“Midjourney has been particularly impressive in terms of the images it can produce,” Mulvey said. “I typed in ‘a construction worker in a helmet checking a tablet device next to a safe water hole.’ You can see the possibilities of taking advantage of it — you can specify the shot, the look and the style, rather than drawing on a stock image.”
But first, campaigns must decide what to disclose about the tools behind an ad and what criticism the use of synthetic media will invite, Mulvey said. Already, he added, campaigns have adopted markedly different standards for fact-checking claims and other message content.
Nick Everhart, founder and president of the Republican advertising firm Content Creative Media, said copyright concerns alone are a stumbling block. A class-action lawsuit filed earlier this year accuses some leading AI image generators of illegally scraping artists’ work from the web.
“It’s a dangerous road not worth taking from the start,” Everhart said.