YouTube declares war on low-effort AI videos via demonetization


YouTube is finally taking action against the wave of AI-generated videos flooding the platform, announcing policy changes that make it harder for creators to monetize low-quality, mass-produced content.

The update, which rolled out on July 15th, targets what many call "AI slop": the endless stream of lazy, repetitive videos that have turned YouTube feeds into a content wasteland.

AI content farms on YouTube

YouTube has become ground zero for AI content farms. These channels pump out hundreds of videos pairing stolen clips with robotic narration, fake news reports crafted to push political narratives, and entirely AI-generated true crime series that have fooled millions of viewers.

Some AI music channels attract millions of subscribers without creating a single original note.

Volume isn't the only problem, though; profit is. These slop merchants leverage YouTube's Partner Program to make money from content that requires no creativity, research, or effort.

YouTube insists this isn't a new policy but a "minor update" to existing rules that already require "original and authentic" content. In effect, the company is stating what should have been obvious all along: mass-produced, repetitive spam doesn't deserve monetization.

The new policy covers repetitive, low-effort videos

Starting July 15th, content featuring AI narration "without personal commentary or storytelling," slideshow compilations of reused clips, and reaction videos offering little original insight no longer qualify for revenue sharing.

YouTube is particularly targeting "highly repetitive formats" on Shorts, its TikTok competitor, which has become a breeding ground for AI slop.

Meanwhile, YouTube CEO Neal Mohan has recently been touting new AI video generation tools, promoting the very technology that is fueling the content problem in the first place.

In other words, the company wants to crack down on AI videos while simultaneously helping creators make them.

What's even more ironic is that Google's AI models, including Veo 3, were reportedly trained on YouTube content without creators' permission. The platform is essentially using creators' work to build tools that compete with those same creators, then punishing the lowest-quality results.

Will this crackdown on AI slop actually work?

The policy update sounds tough, but enforcing it will be no walk in the park. Content moderation is inherently imperfect, and slop creators are already adapting.

Adding a few seconds of "personal commentary" to an AI-generated video might suddenly pass the authenticity test, and the vague language around "highly repetitive formats" leaves plenty of room for interpretation.

The platform wants to embrace AI as the future while maintaining quality standards. But as any YouTube user can tell you, the algorithm doesn't distinguish between good and bad AI; it just wants engagement.

Legitimate creators who use AI tools to enhance their content shouldn't have to worry. YouTube has clarified that using AI to improve videos is permitted, as long as the content meets its other policy requirements.

The target is not individual creators experimenting with new technologies, but industrial-scale content farms.

The real test is whether YouTube can distinguish between useful AI and harmful slop. This update won't solve the AI slop problem overnight, but it's a start. After all, when robots create the content, everyone loses (except the ones cashing the checks).


