Google targets filmmakers with new generative AI video model Veo

It's been three months since OpenAI demoed its compelling text-to-video AI, Sora, and now Google is looking to steal some of that attention. Announced at its I/O developer conference on Tuesday, Google says Veo, its latest generative AI video model, can produce videos in a variety of visual and cinematic styles at “high-quality” 1080p resolution.

According to Google's press release, Veo has “advanced natural language understanding,” allowing the model to interpret film terms like “time-lapse” and “aerial shots of a landscape.” Users can direct the desired output with text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” with the movement of people, animals, and objects depicted more realistically throughout shots.

Below are some examples; please forgive the low resolution, as the demo videos had to be compressed into GIFs.
Image: Google

In a press preview on Monday, Google DeepMind CEO Demis Hassabis said that video results can be refined with additional prompts, and that Google is exploring additional features to help Veo create storyboards and longer scenes.

As with many of these AI model previews, most people hoping to try Veo for themselves will have to wait a while. Google says it's inviting select filmmakers and creators to experiment with the model to determine how it can best support creatives, and that it will build on these collaborations as it develops the technology further.

Here you can see how the sun is accurately recreated behind the horse, and how the light shines softly through its tail.
Image: Google

Some Veo features will be made available to select creators in a private preview within VideoFX in the coming weeks. You can sign up for the waitlist here for an early chance to try it out. Otherwise, Google also plans to bring some of Veo's capabilities to YouTube Shorts “in the future.”

Veo is one of several video generation models Google has produced over the past few years, from Phenaki and Imagen Video, which created grainy, distorted clips, to the Lumiere model announced in January of this year. The latter was one of the most impressive models we'd seen before Sora's February announcement, and Google says Veo further improves on its ability to understand video content, simulate real-world physics, and render high-resolution outputs.

Meanwhile, OpenAI is already pitching Sora to Hollywood and plans to release it publicly later this year, having hinted in March that it could be ready in “a few months.” The company is also exploring adding audio to Sora and may make the model available directly within video editing applications like Adobe's Premiere Pro. Given that Veo is likewise being pitched as a tool for filmmakers, OpenAI's head start could make it difficult for Google's project to compete.
