Mizzy’s viral video shows flaws in TikTok’s AI moderation system



A five-minute run of videos posted by London-based TikTok “prankster” Mizzy will make you wonder how he stayed on the platform for so long.

In the videos, the 18-year-old appears to be wreaking havoc. In one, Mizzy (real name Bacari-Bronze O’Garro) allegedly approached and threatened a woman standing alone. Police later made an arrest.

O’Garro’s account has now been removed from TikTok, but he previously posted with impunity. A video posted online two weeks ago appeared to show him entering a stranger’s house, sitting on their couch and alarming the family inside.

TikTok’s delay in removing the videos may seem puzzling to anyone who has spent time watching what are framed as pranks. But it highlights a fatal flaw in content moderation on social media: the five minutes it takes to flick through his videos is a lifetime compared with the mere seconds a content moderator at a major tech platform has to make a decision.

TikTok employs 40,000 content moderators worldwide. But that number pales in comparison with the volume of content they are expected to rule on. A statement to parliament in 2020 suggested that TikTok users in the UK post 1.6 million videos to the platform per day, a figure that has likely risen significantly since.

Not all of those videos can be viewed by humans. TikTok’s content moderation system first uses artificial intelligence (AI), particularly computer vision, to screen out videos that may violate its rules. Those videos are then sent to human moderators, who in effect mark the AI’s homework.

Videos flagged as rule violations by users can also be sent to human moderators for review. However, viewers may choose not to report a video. “Prank videos have a strong social media tradition and resonate with users’ desire for surprise, entertainment and irresistible stimulation,” said Tom Divon, who studies TikTok culture at the Hebrew University of Jerusalem.
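The two routes described above — AI screening first, with user reports as a second path to human review — can be sketched as a simple triage loop. Everything here (the function names, the 0.8 confidence threshold, the placeholder classifier) is hypothetical and for illustration only; TikTok’s actual system is not public.

```python
# Hypothetical sketch of an AI-first moderation triage pipeline.
# Names and thresholds are illustrative, not TikTok's real system.

REVIEW_THRESHOLD = 0.8  # assumed cut-off for escalating to a human


def ai_screen(video_id: str) -> float:
    """Stand-in for a computer-vision model: returns the estimated
    probability that the video violates the rules."""
    # A real system would run frame-level classifiers here.
    return 0.0  # placeholder: the model sees nothing wrong


def triage(video_id: str, user_reported: bool = False) -> str:
    """Route a video either to publication or to human review."""
    score = ai_screen(video_id)
    if score >= REVIEW_THRESHOLD or user_reported:
        return "human_review"  # a moderator "marks the AI's homework"
    return "published"         # looks harmless to the AI


print(triage("vid123"))                      # published
print(triage("vid456", user_reported=True))  # human_review
```

The sketch makes the article’s point concrete: a prank video that looks benign frame-by-frame scores low, and unless a viewer reports it, it is published with no human ever seeing it.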

TikTok removed about six of every 1,000 videos posted on the app between October and December 2022, the latest period for which data is available. In the UK, 2.1 million videos were removed in the same period, 75% of them before anyone had watched them.

A TikTok spokesperson said: “Our Community Guidelines clearly prohibit content that promotes criminal activity. In connection with this issue, we have banned accounts that violate these guidelines.”

TikTok’s system is nothing new across social media, and it works relatively well for videos with obviously problematic content. Nudity, guns and other weapons can all be detected by AI, and human moderators can then check whether the context in which they appear is prohibited. TikTok internally characterizes its approach to moderation as “AI for scale, people for context.”

The problem with the video in this case is that it looks harmless to the AI. Without the context that the house O’Garro allegedly entered was not his own, all the AI sees is a figure walking through a door and sitting on a couch. “Moderation decisions often involve evaluating the context and intent behind a video. Determining whether a particular prank video crosses the line into criminal activity is subjective and requires careful analysis,” Divon said.

A human watching the video knows immediately that something is wrong. But the AI apparently did not flag it for human review before the video went public. Why not?

And even when a video does reach a human, TikTok’s army of content moderators has only a few seconds to decide whether it violates the platform’s rules. One current moderator said they are tasked with checking 1,000 videos in a single shift. That pace demands snap decisions, which means some videos inevitably slip through the net.
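The arithmetic behind those “few seconds” is straightforward. Assuming (hypothetically) an eight-hour shift spent entirely on review — the shift length is not stated in the article, only the 1,000-video quota — each video gets under 30 seconds:

```python
# Back-of-the-envelope maths for a moderator's review budget.
# The 8-hour shift length is an assumption; the 1,000-video
# quota comes from the moderator quoted above.

shift_hours = 8
videos_per_shift = 1_000

seconds_per_video = shift_hours * 3600 / videos_per_shift
print(f"{seconds_per_video:.1f} seconds per video")  # 28.8 seconds per video
```

And that ceiling ignores breaks, training and appeals work, so the realistic budget per video is lower still.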

Liam McLoughlin, a lecturer at the University of Liverpool specialising in social media and content moderation, said there are questions to ask beyond moderation. “The bigger question for me is: why are young people making videos of themselves breaking into strangers’ houses, stealing dogs and harassing people?”

Videos like this exist because they have an audience. “We know that platforms are designed to induce certain user behaviours, such as prolonged browsing within the app,” McLoughlin said. “But platforms such as TikTok and YouTube also appear to contain perverse incentives that encourage dangerous, anti-social or illegal behaviour.”

“Usually people think of being arrested as a moment of shame, a public humiliation,” McLoughlin added. “With friends recording and statements being issued to fans, this suggests it was aimed at a wider audience and at public influence.”

Banning illegal videos is all well and good, McLoughlin said. “But if platforms are serious about blocking this kind of content, they need to go a step further, investigate the techno-sociological impact of their platforms and take a closer look at what sits behind their recommendation algorithms.”
