YouTube Aims To Limit Low-Quality, Repetitive Content Created By AI

Image Credits: Omar Marques/SOPA Images/LightRocket / Getty Images

YouTube is set to update its policies to limit creators from monetizing “inauthentic” content, such as mass-produced or repetitive videos, the kinds of content increasingly generated using AI tools.

Starting July 15, the platform will update its YouTube Partner Program (YPP) monetization rules, offering clearer guidance on what content qualifies for revenue and what doesn’t.

While YouTube hasn’t published the exact wording of the new policy, its Help page states that creators must always produce “original” and “authentic” content. The upcoming update aims to clarify what qualifies as “inauthentic” in the current content landscape.

YouTube Eases Concerns Over Monetization of Reaction and Clip-Based Content

Some creators feared the policy update might restrict monetization of content like reaction videos or those using clips, but YouTube’s Head of Editorial & Creator Liaison, Rene Ritchie, clarified in a post that this won’t be the case.

In a video update posted Tuesday, Ritchie described the change as a “minor update” to YouTube’s long-standing YPP policies, aimed at more clearly identifying mass-produced or repetitive content.

He also noted that such content has already been ineligible for monetization for years, since viewers typically see it as spam.

What Ritchie doesn’t mention, however, is how much simpler it has become to produce this kind of content.

Rise of AI Tools Fuels Wave of Low-Quality, Mass-Produced Content on YouTube

The surge in AI tools has led to a flood of low-quality, AI-generated material on YouTube, often referred to as “AI slop.” Examples include videos with AI-generated voiceovers layered over images or reused clips, made easy by text-to-speech and text-to-video tools. Some AI music channels now boast millions of subscribers, and fake AI-generated news videos, such as those about the Diddy trial, have attracted millions of views.

Earlier this year, 404 Media reported that a viral true crime series on YouTube was entirely AI-generated. Even YouTube CEO Neal Mohan’s likeness was misused in an AI-generated phishing scam on the platform, despite the tools YouTube offers for users to report such deepfakes.

Although YouTube portrays the upcoming policy shift as a minor update or clarification, the growing presence of AI-generated content, and the monetization of it, poses a real threat to the platform’s credibility and overall value. It’s no surprise, then, that the company is pushing for clearer rules that would enable large-scale removal of AI slop creators from the YouTube Partner Program.


Read the original article on: TechCrunch

Read more: YouTube Introduces New Shopping Stickers For Shorts Videos