TikTok: AI-generated fake copies of real creators' videos are spreading : NPR

AI Video & Visuals


The screenshot on the left is from the original TikTok video; it shows a woman gesturing with one hand as the words "She's probably panicked" appear on screen. The screenshot on the right is a copy featuring a persona that appears to be AI-generated: someone on a boat on the water, with a headshot of a man in the lower-left corner, as the same words, "She's probably panicked," appear on screen.

Researchers and TikTok users say there is a new kind of deception to watch out for on the hit video app: deepfake videos that copy the exact words of a real creator, delivered in a different voice. In this case, the screenshot on the left is from the original TikTok video. On the right is a copy using a persona that appears to be AI-generated.

Bronson Arcuri/NPR/@aliunfiltered_



Millions of TikTok users last week watched a false video claiming that an incinerator had been installed at "Alligator Alcatraz," a state-run immigration detention facility in the Florida Everglades. The video, which echoed an internet conspiracy theory, spread widely despite there being no evidence for the claim.

One of the videos circulating the rumor attracted nearly 20 million views. It prompted conversation across TikTok, where creators weighed in with their own takes and, in some instances, tried to debunk the unfounded theory.

But one tactic stood out amid this familiar churn of messy online virality: a realistic-looking TikToker explaining the incinerator conspiracy theory directly to the camera. According to two forensic media experts consulted by NPR, the speaker's image and voice appear to have been created using artificial intelligence tools. The twist: the words spoken in the video are exactly the same as those in another video posted by a different TikTok account a few days earlier. The copied version racked up over 200,000 views on TikTok.

To researchers who study deepfakes, AI-generated images and videos that trick people into thinking they are real, the replication appears to represent a new way AI is being used to deceive.

Dallas creator Ali Palmer, who posts on TikTok as @aliunfiltered_, made a video about a father who jumped off a Disney cruise ship to save his child.

She said that copying is rampant on TikTok, but the spam accounts that do it usually repost her entire video. Accounts using AI-generated personas to recite her words, she said, are new.

“It's upsetting. It feels like a violation of privacy,” said Palmer, 33.

Palmer said she has reported all sorts of copies to TikTok, but nothing happens. "That's incredibly frustrating."

Hany Farid, a professor at the University of California, Berkeley who studies digital forensics, says what's new here is that the words of average people are being stolen.

"We always see people's identities being appropriated to do things like hawk crypto scams and fake cancer treatments, but they're usually famous people or influencers," Farid said.

Using digital forensic tools, Farid analyzed the copied incinerator video, the Disney cruise video and other videos posted by the same accounts at NPR's request, and concluded they were the product of AI.

"It's really easy to do with today's AI tools, and something that easily slips through the cracks of content moderation," he said.

Copying videos using AI does not appear to violate TikTok's policies, but the platform does require users to "label all AI-generated content that contains realistic images, audio, and video."

After this story was published, labels noting AI-generated media appeared on videos from the two accounts NPR identified.

Deepfakes have grown more sophisticated in recent years, and they are increasingly deployed for malicious purposes.

The technology has been used to impersonate politicians including Secretary of State Marco Rubio, former President Joe Biden and Ukrainian President Volodymyr Zelenskyy. The rise of deepfake "nudify" tools prompted Congress to pass a federal law this year to counter the spread of non-consensual intimate images, including AI-created fake nudes.

Blurring fact and fiction even further is this latest approach: exploiting TikTok's viral moments by having a fictional creator recite a real creator's words.

It is difficult to measure how widespread the practice is on TikTok, which is used by more than a billion people around the world. NPR was unable to identify the people or motivations behind the accounts replicating creators' words. The accounts did not respond to requests for comment. Neither did TikTok, or the creators whose words were cribbed.

The two accounts NPR identified as using other creators' words, with images and voices that appear to be AI-generated, share some similarities. Each has around 10,000 followers but follows no one. Many of the videos posted by both accounts depict a Black persona, what Berkeley's Farid called "low-quality deepfakes" created with AI. And each account stole words from other TikTok creators on a range of viral topics, from a woman who got a facelift in Guadalajara, Mexico, to a woman wearing a dog collar, to Meghan Markle dancing in a hospital delivery room.

In the Meghan Markle video, the persona takes on a British accent. In others, the audio assumes entirely different registers.

"The bigger story is when you take a step back and look at the entire account. When you look at one video compared to another, it is clear that the persona's voice changes from video to video," said Darren Linvill of Clemson University's Media Forensics Hub. Linvill also reviewed the videos at NPR's request and concluded they were created with the help of AI tools.

Together, the two accounts have drawn millions of views by seizing on viral stories that skew more toward tabloid fodder than political drama. But researchers who track state-sponsored information operations are watching closely, since government-backed actors also regularly test new strategies for gaming virality.

Linvill studies how countries including China, Russia and Iran use digital tools for propaganda. He says creating AI personas, such as faux news anchors, is a tactic that has also been used in state-sponsored influence operations. NPR found no indication that the accounts it identified were part of such a campaign, but the tactics used by deceptive state actors and by accounts simply chasing engagement on social media platforms often overlap.

"Stealing content on social media is as old as social media itself," Linvill said. "What we're seeing here is AI doing what we've seen AI be really good at over and over again: codifying things and making the process cheaper, faster and easier."

Alex Hendrix was scrolling through TikTok this week when he watched two consecutive videos about the Florida incinerator conspiracy theory. It's normal to see many creators focus on the same topic, but these videos struck him as unusual: the monologues were eerily identical, just delivered in different voices.

So Hendrix made a TikTok video pointing this out. So far, it has gotten few views.

"I know there is a lot of fake news on TikTok, and I'm skeptical of things because of AI, but this kind of copying felt new and crazy," he said. "That's why I tell everyone: Don't believe everything you see, and cross-reference everything you see on TikTok before you share it. But I don't know if they'll listen."




