YouTube accused of training AI with new video prompt: “Does this feel like AI slop?”



YouTube has just launched a new pop-up survey asking viewers whether videos look like “AI slop,” sparking speculation that it may be training its own bots. Starting Tuesday, instead of asking whether a video is relevant to a search, YouTube may ask users to share where a clip ranks on their personal AI detection meter.

This reaction may sound paranoid if you don’t know what other companies have done with your data.

Rate YouTube’s performance

Starting March 17, YouTube viewers will see new pop-up prompts during videos. “Does this feel like AI slop?” the prompt asks.

Users can rate how sloppy the video feels, from “not at all” to “very.”


One possible reason for the change is the company’s stated intention to combat the influx of low-quality content generated with large language models (LLMs). In January, CEO Neal Mohan called this a priority for 2026 in his annual letter to the YouTube community.

“It’s becoming increasingly difficult to tell what’s real and what’s generated by AI,” he wrote. “This is especially important when it comes to deepfakes.”

“To reduce the spread of low-quality AI content, we are actively building on established systems that have shown great success in combating spam and clickbait and reducing the spread of low-quality, repetitive content.”

He also discussed expanding YouTube’s own LLM, which lets users create simple games from videos and text prompts.

The letter follows a report that estimated more than 20% of YouTube content would be AI-generated by the end of 2025.

Is YouTube fighting AI or training it?

Over on X, the app that “promises everything but CSAM this time,” people fed up with the AI deluge are accusing YouTube of using viewers to train AI to produce higher-quality slop, rather than training its algorithms to suppress the low-quality stuff.

View tweets "YouTube isn't protecting you by asking,
@barkmeta/X

“When YouTube asks, ‘Does this feel like AI slop?’ it’s not like they’re protecting you,” @barkmeta wrote. “They’re using you to train the next AI and make it so good you won’t be able to tell the difference.”

“And they let you do it for free.”

“YouTube isn’t banning AI slop, they’re making you label it so they can train the next model to not look like AI slop,” @TukiFromKL argued.

View tweets "YouTube added a pop-up that says,
@Badabo/X

User @birdabo lamented that YouTube’s new prompts “sound like a good thing until you realize they’re literally turning 2 billion users into unpaid AI trainers.”

Many of the people promoting this theory are AI fans or developers themselves. For some reason, they don’t like having their own labor taken and used to enrich someone else without credit or compensation.

Either way, it’s no surprise that today’s internet users would read YouTube’s slop questions as yet another attempt to get all of us to unwittingly train AI. In 2018, it was revealed that reCAPTCHA’s bot-detection tests were using people’s answers and puzzle-solving to do exactly that.

Google, the owner of YouTube, also owns reCAPTCHA.

And just this past weekend, news broke that Niantic had used all the cute photos players took in Pokémon Go to train its AI model.

No wonder the term “AI paranoia” is starting to become popular.




