Autoregressive streaming architecture powers real-time, long-form AI video, eight times longer than the industry standard
NEW YORK, July 16, 2025 /PRNewswire/ -- Lightricks, the company behind LTX Video (LTXV) and LTX Studio and a global leader in generative AI video innovation, today announced a major evolution in AI video technology, enabling the generation of clips that exceed 60 seconds. This milestone makes Lightricks the first company to enable long-form, streaming AI video creation at scale. The breakthrough represents an eight-fold leap beyond the industry's current eight-second standard and is the first real-time, streaming-capable long-form AI video model in production today. Unlike traditional models limited to short outputs, LTXV's autoregressive streaming architecture generates video continuously in real time, allowing developers and creators to build longer, more consistent stories rather than isolated clips.
Like previous LTXV updates, this new capability will remain open weight and will be available to developers, academics and generative AI video enthusiasts. It also powers commercial products such as LTX Studio, Lightricks' flagship creative production platform, which helps creators and media teams design, iterate on and produce video projects using AI.
The new LTXV release introduces an enhanced autoregressive video engine that allows video clips to be streamed live to viewers as they are rendered. The system returns the first second of content almost instantly, enabling interactive, continuous narratives with full control over scene development. This opens the way for a new category of generative storytelling applications, from player-driven game cutscenes and adaptive educational content to real-time AR visuals synchronized with live performers.
"Beyond the 60-second mark, we unlock a new era of generative media," said Zeev Farbman, co-founder and CEO of Lightricks. "LTXV is unique in its ability to create long scenes while maintaining full control over the extended sequence. This allows for coherent storytelling with visual and semantic consistency, transforming AI video from demos or random clips into true media with creative intent."
LTXV's autoregressive architecture supports both Lightricks' 13B and mobile-friendly 2B parameter models. Creators and developers can apply pose, depth, or Canny control LoRAs not only at the start of a prompt, but continuously throughout scenes of 30 seconds or more. Compatible with Lightricks' IC-LoRA infrastructure, the system also enables near-real-time motion-capture feeds, expanding usability across interactive platforms.
"We haven't just accelerated AI video; we've reached a point where it is truly controllable," added Yaron Inger, co-founder and CTO. "This leap turns AI video into a long-form storytelling platform, rather than just a visual trick."
Technical highlights:
- Real-time autoregressive sequence conditioning across model variants: video is generated in chunks of frames, with each chunk conditioning the generation of the next. This lets the model build motion and story with smooth continuity, much as a writer builds a story sentence by sentence.
- Cost and efficiency: LTXV runs efficiently on a single H100 or consumer-grade GPU and delivers artifact-free 30-second clips. By contrast, public benchmarks for competing solutions require multiple H100s to generate 5-second 1080p clips, and as many as eight H100s to generate 41 seconds of advanced real-time output.
- Streaming-first architecture: the first ~1 second of video is returned instantly, with the remainder streamed live
- Supports continuous control inputs for dynamic scene generation
- Fully compatible with IC-LoRA for motion- and style-LoRA integration
- Speed: the first second is returned in ~1 second, and the full 60 seconds generates in real time (built into the streaming architecture described above)
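The chunked autoregressive generation described above can be sketched in a few lines. This is a minimal illustration of the idea, not Lightricks' actual API: all names (`generate_chunk`, `stream_video`, `CHUNK_FRAMES`, `CONTEXT_FRAMES`) are hypothetical, and real frames are stood in for by labeled strings so the conditioning chain is visible in the output.

```python
# Hypothetical sketch of chunked autoregressive video streaming:
# frames are produced in chunks, each chunk conditioned on the tail
# of the previous one, and streamed to the viewer as soon as rendered.

CHUNK_FRAMES = 8      # frames produced per autoregressive step (assumed)
CONTEXT_FRAMES = 4    # trailing frames fed back as conditioning (assumed)

def generate_chunk(context, chunk_index):
    """Stand-in for the model call: returns CHUNK_FRAMES new 'frames'.

    A frame here is just a label that records what it was conditioned on,
    so continuity between chunks is explicit in the output.
    """
    prev = context[-1] if context else "start"
    return [f"chunk{chunk_index}-frame{i}(after {prev})"
            for i in range(CHUNK_FRAMES)]

def stream_video(total_frames):
    """Yield frames chunk by chunk; each chunk conditions the next."""
    frames = []
    chunk_index = 0
    while len(frames) < total_frames:
        context = frames[-CONTEXT_FRAMES:]   # sliding conditioning window
        chunk = generate_chunk(context, chunk_index)
        frames.extend(chunk)
        chunk_index += 1
        for frame in chunk:                  # stream as soon as rendered
            yield frame

frames = list(stream_video(24))
```

Because `stream_video` is a generator, a consumer can display the first chunk while later chunks are still being generated, which is the property that makes near-instant first-second playback possible.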
Important use cases include:
- Advertising and social media: on-demand generated 15-60 second vertical ad spots
- Gaming: live-rendered cutscenes generated from gameplay data
- Live events: stage-synced AR characters that respond in real time
- Education: adaptive explainer videos that evolve with learner input
LTXV is available as open-weight models on Hugging Face (LTX-Video) and GitHub (LTX-Video), and is fully integrated into Lightricks' flagship storytelling platform, LTX Studio. With a library of models designed for diverse creative needs and a commitment to open development, Lightricks is shaping the future of generative AI video, bridging research-driven breakthroughs and real-world applications. To learn more about Lightricks, its products, technology and open-source initiatives, visit www.lightricks.com.
SOURCE Lightricks

