Google removes AI-generated Disney videos after takedown request

According to Variety, Google has quietly removed dozens of YouTube videos that showed Disney-owned cartoon characters behaving in problematic ways, apparently in response to complaints from Disney. Many of the affected video pages on YouTube now show a generic message saying the content was removed due to a Disney copyright claim.

Why YouTube deletion happened after Disney's request

Disney's letter cited rampant inappropriate use of its intellectual property across a range of franchises, from Star Wars and Marvel to animated hits such as Frozen, Moana and Lilo & Stitch. The notice highlighted a trend of clips that appear to have been generated by Google's own video generation model, Veo, as well as AI-generated "action figure" images depicting famous characters such as Deadpool, Elsa from "Frozen," Homer Simpson, and Darth Vader.


User prompts are central to what generative AI can produce, but Disney has made it clear that its characters and story worlds are protected works. Unauthorized reproductions and derivative depictions, especially ones distributed at scale through major platforms, may infringe copyright and related rights. The takedown shows how quickly large rights holders are moving to police AI content that crosses the line.

Explaining Google's response and platform policy

Google said it was working with Disney to address the allegations and pointed to copyright controls across its products that date back years: YouTube's Content ID system, the DMCA notice-and-takedown process, and its "Google-Extended" control that allows publishers to restrict certain data from being used to train Google's AI models. In practice, videos named in compliant takedown notices are removed quickly so the platform can maintain its safe harbor protections.

This incident highlights a tension unique to the age of AI. Some of the flagged clips were generated using models built by Google and hosted on Google's own video platform, according to two people familiar with the incident. That dynamic will only become more urgent as the quality and volume of generated video improve and platforms are pressed to deploy active filters alongside immediate removal of known IP.

Copyright law has long distinguished between training and output, but courts are only beginning to examine how that line applies to generative systems. Rights holders argue that AI tools can create unauthorized derivative works and compete with licensed products and media, regardless of whether the models are trained on so-called "publicly available" material. Creators and AI companies counter that many uses are transformative or user-driven, raising fair use questions that have yet to be resolved.


Disney has taken a more aggressive stance than many of its peers. The new notice to Google follows Disney's move earlier this year to join Universal in suing image generation company Midjourney, framing mass scraping and unauthorized output as systematic infringement. Publishers and authors have filed separate lawsuits against multiple AI companies, suggesting broader legal risk for platforms that host or enable unlicensed generated content.

Mixed signals in Disney's evolving AI push

The takedown campaign comes even as Disney deepens its own adoption of AI, including a $1 billion investment in OpenAI. A three-year license agreement gives Disney preferential access to OpenAI's technology, allows users to create Sora videos featuring Disney characters within certain limits, and lets Disney select fan-made shorts to feature on the Disney+ streaming service.

The two-track approach is deliberate: Disney is blocking unauthorized use while building a tightly controlled, licensed channel for AI remixing of its IP. Expect stricter enforcement of uploads outside its ecosystem, combined with new official channels that monetize fan creativity without ceding control.

What it means for creators and platforms

For creators, the message is simple: using AI does not automatically grant permission to use protected characters. Uploads built on branded IP, with or without a disclaimer about AI use, are more likely to be removed and treated as infringing. For platforms, the issue is operational: they need better detection tools, expanded rights databases, and AI generation tools that reflect licensing obligations before content is published.

YouTube's existing tools, such as Content ID and AI disclosure labels, were built for an earlier era of the platform and may need to be complemented by model-level guardrails and automatic filters keyed to major IP catalogs. If that fails to slow the torrent, more cease-and-desist letters and potentially new lawsuits are almost certain to follow.


