LTX Video Breaks the 60-Second Barrier and Redefines AI Video as a Long-Form Medium



Israeli AI startup Lightricks is best known for viral mobile apps such as Facetune and Videoleap, but it has been pushing deeper into professional production with technical milestones that set it apart from its generative-video peers. With the release of its new autoregressive video model, LTXV, the company claims it can generate clips longer than 60 seconds, far beyond the current standard length for AI video. Leading models, including OpenAI's Sora, Google's Veo, and Runway's Gen-4, produce much shorter clips, and none of them supports real-time rendering at this scale.

According to CEO and co-founder Zeev Farbman, the breakthrough matters “not just because of the length, but because of what extended sequences enable for narrative,” ushering in what he calls “a new era of generated media.” “It's the difference between a visual stunt and a scene,” Farbman told me in a recent interview. “AI video stops being a demo and becomes a storytelling medium.”

LTXV's new architecture streams video in real time, returning the first second almost instantly and building the rest on the fly. The system uses small chunks of overlapping frames to condition what comes next, preserving continuity of motion, characters, and action throughout the sequence. It is the same autoregressive approach that powers large language models like ChatGPT, applied to chunks of frames instead of tokens.
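To make the mechanism concrete, here is a minimal sketch of autoregressive chunked generation, assuming a hypothetical `generate_chunk` model call (not part of any published LTXV API): each new chunk is conditioned on the overlapping tail of the previous one, which is what lets motion and characters carry across an arbitrarily long sequence.

```python
import numpy as np

def generate_video(generate_chunk, total_frames, chunk_size=32, overlap=8):
    """Autoregressive chunked generation: each chunk is conditioned on the
    overlapping tail of the previous chunk, so the model only ever sees a
    short window while the output forms one continuous sequence.

    `generate_chunk(context)` is a hypothetical model call returning
    `chunk_size` new frames given `overlap` conditioning frames
    (or None for the first, prompt-only chunk)."""
    frames = []
    context = None                        # first chunk comes from the prompt alone
    while len(frames) < total_frames:
        chunk = generate_chunk(context)   # shape: (chunk_size, H, W, 3)
        frames.extend(chunk)              # frames can be streamed out as they arrive
        context = chunk[-overlap:]        # condition the next chunk on the tail
    return np.stack(frames[:total_frames])
```

Because each chunk can be emitted as soon as it is generated, the viewer can start watching after the first chunk rather than waiting for the full clip, which is what makes the streaming behavior described above possible.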

Last week, I saw the model at work in a demo over a Zoom call. Most systems, including top models such as Veo 3, Runway Gen-4, and Kling, make users wait several minutes per generation. LTX is much faster. The system rendered a continuous 60-second scene of a woman cooking as a gorilla entered the kitchen and hugged her. The video streamed as it was generated, with barely a pause. In another scene, a car passed under a bridge and emerged on the other side before continuing its journey.

Of particular note, LTXV is open source rather than locked behind a proprietary API. The model will be released as open weights on GitHub and Hugging Face, free for individuals and small teams generating under $10 million in revenue. Farbman says this is in line with Lightricks' strategy of “open development for real-world applications.”
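For a sense of what open weights mean in practice, here is a hedged sketch of loading them through Hugging Face's diffusers library. The `LTXPipeline` class and the `Lightricks/LTX-Video` repository name come from the previously released LTX-Video model; the new autoregressive model's exact entry point may differ.

```python
# Sketch based on the diffusers integration of the earlier LTX-Video
# release; the new model's loading API is an assumption and may differ.
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained(
    "Lightricks/LTX-Video", torch_dtype=torch.bfloat16
)
pipe.to("cuda")  # a single high-end GPU, per Lightricks' claims

video = pipe(
    prompt="A woman cooks in a sunlit kitchen as a gorilla walks in",
    num_frames=161,            # roughly 6-7 seconds at 24 fps
    num_inference_steps=50,
).frames[0]
export_to_video(video, "ltx_clip.mp4", fps=24)
```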

From a technical standpoint, the new model is fast and light, running on a single Nvidia H100 or a high-end consumer GPU. By contrast, Farbman points out that published benchmarks for other models often require multiple H100s to produce a five-second high-resolution clip.

The implications go far beyond YouTube clips. Lightricks envisions uses in advertising, real-time game cutscenes, adaptive educational content, and augmented-reality performances. Imagine an AR character sharing a stage with a musician, responding to the live performance in real time. “This leap turns AI video into a long-form storytelling platform, not just a visual trick,” Farbman said.

The model is part of a broader roadmap for LTX Studio, the company's browser-based production platform, which offers script-to-scene authoring, character tracking, and style consistency. Multimodal support, including motion capture and audio-based conditioning, is coming soon. After that: 4K output and seamless frame interpolation for smoother motion.

Farbman was quick to admit there is still work to do. “Prompt adherence for long-form content is the next big frontier,” he said. “We've seen dramatic improvements, but scenes with complex interpersonal behavior are still difficult.” Still, what I saw went far beyond what most AI video tools can manage today.

As for monetization, Farbman says Lightricks is in discussions with larger studios and platforms about commercial licensing and revenue-sharing deals, while keeping development open to the wider creative community. “We believe AI filmmaking isn't just for engineers,” he said. “It should be for storytellers.”



