Lightricks' latest release lets creators steer long-form AI-generated videos in real time

Open-source artificial intelligence pioneer Lightricks Ltd. is raising the stakes with the launch of what it calls the industry's first long-form AI video generation model with live streaming capabilities.

The latest version of its flagship LTX Video model is said to support live streaming of AI-generated video, which means the output can be refined in real time: new prompts can be added the moment the user starts creating content.

Additionally, it sets a new standard for video generation length, allowing users to generate clips of up to 60 seconds. That's well above the current industry standard, which averages just eight seconds.

Lightricks is considered a pioneer in AI video, having launched the original LTXV model in February 2024 alongside its professional-grade AI film production tool LTX Studio. LTXV was notable for being open source, in stark contrast to competing models such as OpenAI's Sora, Runway Inc.'s Gen-4 and Pika Labs' Pika AI 2.1. The subscription-based LTX Studio platform provides comprehensive tools for editing LTXV's output, but the base model's open weights are free to download, and Lightricks invites AI researchers and generative video enthusiasts to tweak and experiment with them.

LTXV also stands out as an ethical model, trained on data fully licensed from stock media providers such as Getty Images Holdings Inc. and Shutterstock Inc.

The new features in today's release should help LTXV stand out even more from the crowd, because they enable attractive new use cases that are impossible with other AI video models.

Today's update centers on a new autoregressive video engine, which not only supports live streaming of generated content but also lets users refine their prompts on the fly. As Lightricks explained, once the first batch of frames has been generated from the original prompt, the user can enter additional prompts to continuously steer the video until it's complete. This gives creators far more control over a video's visuals, scene development and characters, opening up numerous new possibilities for AI-generated content.
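Conceptually, that chunk-by-chunk loop can be sketched as follows. This is a minimal illustration of the workflow the company describes, not Lightricks' actual interface: every name in it (generate_chunk, prompt_updates and so on) is hypothetical.

```python
import queue

# Hypothetical sketch of the autoregressive streaming loop described above.
# All names here are illustrative stand-ins, not Lightricks' real API.

prompt_updates = queue.Queue()      # prompts typed by the creator mid-stream
current_prompt = "a dancer on a neon-lit stage"
context_frames = []                 # frames generated and streamed so far

def generate_chunk(prompt, context):
    # Stand-in for the real model call, which would condition on the
    # frames produced so far and return the next short batch of frames.
    return [f"frame(prompt={prompt!r}, t={len(context) + i})" for i in range(24)]

for _ in range(60):                 # e.g. sixty one-second chunks
    try:
        # Pick up any prompt the creator entered since the last chunk.
        current_prompt = prompt_updates.get_nowait()
    except queue.Empty:
        pass
    chunk = generate_chunk(current_prompt, context_frames)
    context_frames.extend(chunk)    # grow the conditioning window
    # stream_to_viewers(chunk)      # push the new frames to the live stream
```

The key design point is that each chunk conditions on everything generated before it, so a mid-stream prompt change redirects the video without discarding the footage already shown.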

Lightricks suggests this could be of interest to video game developers, who could, for example, live-stream cutscenes during an online game that adapt to how the player interacts with it. Meanwhile, live online concerts viewed in augmented reality could be overlaid with AI-generated dancers who move in sync with the human performers. It could also support interactive educational videos that evolve based on how learners engage with them.

As Yaron Inger, co-founder and chief technology officer of Lightricks, put it, “AI video has reached a point where it is not just prompted, but truly directed.”

The company said it has integrated the new autoregressive architecture into the most powerful, 13 billion-parameter version of LTXV released in May, as well as into the 2 billion-parameter model designed to run on mobile platforms.

The new model and its open weights can be found on Hugging Face and GitHub, and its streamlined architecture makes it well-suited to individual developers and enthusiasts. According to Lightricks, LTXV can run on a single Nvidia Corp. H100 graphics processing unit and generate high-resolution videos in seconds, or even on consumer-grade laptops with relatively low latency.
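For those who want to experiment with the open weights, a minimal sketch using the Hugging Face diffusers integration published for the earlier LTX-Video checkpoint might look like the following. The model ID, resolution and frame count are taken from that checkpoint's documentation and are assumptions as far as today's release is concerned; the new model card should be consulted for exact usage.

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Load the open LTX-Video weights from Hugging Face. Model ID and parameters
# follow the earlier checkpoint's docs and may differ for the new release.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")  # a single H100-class GPU is enough, per Lightricks

video = pipe(
    prompt="a timelapse of storm clouds rolling over a mountain ridge",
    width=704,
    height=480,
    num_frames=161,           # roughly six seconds of footage
    num_inference_steps=50,
).frames[0]

export_to_video(video, "ltx_output.mp4", fps=24)
```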

That's also a big deal, as most proprietary video generation models require significantly greater computing resources, meaning they can only be run efficiently on cloud-based infrastructure.

Still, Lightricks' latest update comes at a time when the major players in AI video generation are all striving to differentiate their products, and its competitors can boast plenty of novel features of their own.

For example, Google LLC's Veo 3, launched in May, stands out as the only AI video model that can generate its own audio tracks, including soundtracks, character speech and animal noises. Meanwhile, a startup called Moonvalley AI Inc. is making interesting moves with motion transfer: users can upload a video of rough seas, for example, and apply its movement to something like desert dunes, making them roll like waves.

Moonvalley also claims to be an ethical AI startup, pointing out that its model, Marey, is likewise trained on licensed content.

Image: SiliconANGLE/Microsoft Designer
