Real faces, fake combat, and a glimpse of the future of AI in the movies.
Recently, a hyper-realistic AI-generated clip of Tom Cruise fighting Brad Pitt went viral online. The footage, created with Seedance 2.0, an advanced video model developed by ByteDance, looked cinematic enough to fool many viewers, even though no such battle ever took place. The clip stirred both excitement and alarm across Hollywood and the tech community.
A short Tom Cruise vs. Brad Pitt scene generated from a brief text prompt shows how far AI video has come in just a few years. Early AI systems were largely unable to animate recognizable faces. Today's models, such as Seedance 2.0, OpenAI's Sora, and Google's Veo, can generate highly detailed motion, lighting, and even synchronized audio from simple text or image input.
This progress has also sparked a backlash. Major studios, including Disney and Paramount, have accused ByteDance of using copyrighted material and performers' likenesses without authorization, and sent cease-and-desist letters shortly after the clip went viral. The Motion Picture Association and actors' guilds have raised concerns about AI's potential impact on copyright, consent, and creative work.
But while AI-generated battle scenes could shake up social media, experts say generating long-form narratives remains a major engineering challenge. Maintaining a consistent character identity, scene continuity, dialogue, and plot structure over 90 minutes is much more difficult than creating a few seconds of action. Current models often struggle with temporal coherence and require significant computational power to extend beyond short clips.
What does this mean for Hollywood? For now, AI is more likely to serve as a tool that assists with previsualization, green-screen work, or rough cuts than to replace directors and actors. Watch the video to explore how today's AI tools are reshaping visual storytelling and what's still holding them back.
