AI-altered photos and videos of the Minneapolis shooting blur reality



Since last weekend’s fatal shooting of Alex Preti by federal agents in Minneapolis, AI-altered images and videos of his final moments have exploded across the internet, from Facebook and TikTok to Instagram and X.

The rapid spread of artificial intelligence-altered media, much of it showing Preti collapsing seconds after being shot, has obscured key details of the shooting on social networks. Unlike AI-generated deepfakes that depict completely unrealistic scenes and are easily identified as fake, many of the AI-altered depictions of the shooting appear to be based on verified images, resembling reality closely enough to confuse and mislead many people online.

And while awareness of advanced artificial intelligence’s capabilities is spreading, some people online are extending their suspicions to authentic media, falsely claiming that legitimate photos and videos of Preti were altered by AI.

A seemingly AI-enhanced image of Preti, an ICU nurse, falling forward as a federal agent points a gun at his back has been viewed more than 9 million times on X, despite a Community Note pointing out that the image was AI-enhanced. Among other AI-generated details, the still image depicts a headless ICE officer.

Image caption: The AI-altered image shows a man resembling Alex Preti lying face down on the sidewalk, with an ICE officer pointing a gun at his back. (NBC News via X)

Sen. Dick Durbin, D-Ill., displayed the image during a speech on the Senate floor Wednesday, apparently unaware that the image was not real.

In a statement to NBC News, a spokesperson for Sen. Durbin said, “My office used a photo on the Senate floor that was widely circulated online. Staff did not realize until after the fact that the image had been slightly edited, and we regret that this mistake occurred.”

Other posts feature realistic-looking videos, including a TikTok video of an AI-generated Preti talking to an ICE officer and a Facebook video depicting an officer accidentally firing a gun at Preti. The Facebook post has been flagged as “AI enhanced” by a Community Note and has been viewed more than 44 million times. It remains unclear whether the agent who shot Preti fired accidentally.

Ben Colman, co-founder and CEO of the deepfake detection company Reality Defender, said the prevalence of AI-altered media in posts about the shootings is concerning but not surprising. An AI-altered image attempting to reveal the identity of the ICE agent who shot and killed Renee Nicole Good, another American killed in Minneapolis in recent weeks, began circulating online earlier this month, leading many people online to misidentify someone else as the agent.

“Over the past few months, we’ve seen a significant increase in photos on social media that include AI-generated ‘touch-ups’ to grainy, blurry photos,” Colman told NBC News. “The problem with these deepfakes is that they are still deepfakes: at best a rough approximation, at worst a complete fabrication, and they ultimately cannot accurately enhance or unmask an individual.”

Regarding the image Durbin displayed on the Senate floor, Colman said, “Details such as the missing head of the person in the photo demonstrate how harmful it is for these false photos to spread. Presenting fake photos out of context as if they were real can be distracting and undermine truth and facts.”

The proliferation of AI-altered media has led many people online to falsely claim that authentic videos of Preti are not real. Experts worry that this dynamic could fuel a phenomenon known as the “liar’s dividend,” in which bad actors claim that real media is AI-generated in order to sow mistrust and avoid accountability.

Three videos independently reviewed by NBC News show Preti getting into an altercation with federal immigration agents in Minneapolis a little more than a week before his death. One of the videos, shot by a reporter for The News Movement, has nonetheless been labeled as AI-generated by some social media users.

A video showing Preti kicking the back of the agents’ car before being tackled to the ground was confirmed as authentic by his family through a representative. A witness who recorded a second verified video of the same incident told NBC News that he hugged Preti and asked if he was okay after the altercation with federal agents.

Few, if any, tools available to news consumers can determine with certainty whether content was created or manipulated by AI. On X, the platform’s AI assistant Grok has answered inquiries about the authenticity of the footage, with some of its responses incorrectly claiming that the authentic footage “appears to have been generated or altered by AI.”

The spate of AI deepfakes has further fueled misinformation surrounding Preti’s shooting, with multiple right-wing influencers mistakenly identifying Preti as a different Minneapolis resident.

AI-generated misinformation and disinformation around breaking news has become more common over the past year, as advancing AI systems can create high-quality images and videos that blur the line between reality and fiction.




