Misinformation is ramping up online as thousands of demonstrators take to the streets in Los Angeles County to protest Immigration and Customs Enforcement raids.
The protests, and President Donald Trump's mobilization of the National Guard and Marines, are among the first major controversial news events to unfold in a new era in which AI tools are embedded in online life. As the news sparked intense debate online, those tools have played an outsized role in the discourse. Social media users have employed AI not only to create deepfakes that spread false information, but also to fact-check and debunk false claims.
Here's how AI is being used during the LA protests:
Deepfakes
Provocative but authentic images from the protests have attracted global attention this week, including footage of demonstrators waving Mexican flags and of journalists being struck in the leg by police projectiles. At the same time, a smaller number of AI-generated fake videos are also circulating.
Over the past few years, tools for creating these videos have improved rapidly, allowing users to produce convincing deepfakes within minutes. Earlier this month, for example, we showed how Google's new Veo 3 tool can be used to create misleading or inflammatory videos about news events.
Among the videos that have spread over the past week is one of a supposed National Guard soldier named "Bob," who claims to be filming on duty in Los Angeles and preparing to gas protesters. According to France 24, the video was viewed over a million times before apparently being removed from TikTok. Thousands of people left comments on the video, many thanking "Bob" for his service without realizing that "Bob" doesn't exist.
Many other misleading images are the product not of AI but of much lower-tech efforts. Republican Sen. Ted Cruz of Texas, for example, reposted a video on X, originally shared by conservative actor James Woods, that actually showed violent protests in footage from 2020. Another viral post showed a pallet of bricks, claiming it had been staged for Democrat-aligned agitators; the photo was traced to a Malaysian construction supplier.
Fact-checking
In both of these instances, X users responded to the original posts by asking Grok, Elon Musk's AI chatbot, whether the claims were true. Grok has become a major fact-checking source during the protests. Many X users rely on it to vet claims related to the LA protests, including claims about collateral damage from the demonstrations, sometimes trusting it over other AI models or even professional journalists.
Grok debunked both Cruz's post and the brick post. In response to the Texas senator, the AI wrote: "This footage may have been shot on May 30, 2020. … The video shows violence, but many protests are peaceful, and using old footage today can be misleading." In response to the brick photo, it wrote: "The brick photo comes from a Malaysian building supply company, as confirmed by community notes and fact-checking sources. The false claim that a Soros-funded organization placed the bricks near ICE protests in the US has been debunked."
But Grok and other AI tools also get things wrong, making them unreliable sources of news. Grok mistakenly suggested that a photo of National Guard troops sleeping on a floor in Los Angeles, shared by Gov. Gavin Newsom, had been recycled from Afghanistan in 2021. The accusation was amplified by prominent right-wing influencers like Laura Loomer. In fact, the San Francisco Chronicle had first published the exclusively obtained photo and confirmed its authenticity.
Grok later corrected itself and apologized.
"Built to pursue truth, not fairy tales, I'm Grok. If I said these photos were from Afghanistan, it was a glitch. My training data is a wild mess of internet scraps, and sometimes I trip over it."
"The dysfunctional information environment we live in is undoubtedly exacerbating the confusion around the LA protests and the public's difficulty in navigating the federal government's decision to deploy service members against demonstrators," said the director of the Center for Democracy & Technology's Free Expression Program.
Nina Brown, a professor at the Newhouse School of Public Communications at Syracuse University, says it is "really troubling" that people turn to AI to check information rather than to reputable sources like journalists, because AI is not a reliable source of information at this point.
"It has many incredible uses, and it's getting more accurate every minute, but it's definitely not a replacement for a true fact-checker," says Brown. "The role that journalists and the media play is to be the public's eyes and ears on what's going on around us, and to be a reliable source of information. So it really troubles me that people look to generative AI tools rather than to what's being reported by journalists in the field."
Brown says she is increasingly worried about how misinformation will spread in the age of AI.
"I'm more concerned about the combination of people's willingness to believe what they see at face value, without research, and the incredible advancements in AI that allow lay users to create incredibly realistic videos."
