Hybrid video production – how to make AI part of your workflow

Annoying AI gaffes abound on the internet, and video generators are a hugely controversial topic for a variety of reasons. Many professionals feel their work is at risk and can’t imagine how generative models could possibly help it. Why would I want a machine to create random, generic clips that are synthetic, creepy, and ethically questionable, and that don’t carry my voice at all? Frankly, I don’t have an answer. But what if you could keep your voice? And what if there were workflows where AI video generators could actually help? Veteran filmmaker Drew Geraci shows us how to approach hybrid video production with AI from a different perspective.

Drew Geraci is a renowned cinematographer and photographer with over 20 years of experience in the industry, creating some of the most captivating time-lapse and hyperlapse work you’ve probably seen. (For example, the opening credits of “House of Cards.”) Recently, Drew launched his newest MZed course, “Directing the Future: Ethical AI Video for Filmmakers.” In this article, I’ll use material from the course to walk you through his workflow for converting still images into video.

Hybrid video production and the role AI plays in it

My colleagues and I have written a number of news articles about AI video generators over the past few years: Sora, Google Veo, Dream Machine, Veo 3, Sora 2… The technology is developing at breakneck speed, so some of those pieces are already outdated. Yet I have never actually used an AI video generator beyond testing it for a report on its current functionality. Why? The reasons vary, but ethical concerns come first. I also had no idea how generated AI video could become part of my filmmaking routine, especially because I love conceptualizing and creating impactful, cinematic images, collaborating with real people, and having complete control over the outcome. Frankly, none of that seems to apply to AI video. Or am I wrong?

In his course, Drew Geraci demonstrates various use cases that offer different approaches to these deep learning tools. Let’s call this hybrid video production: work that seamlessly blends AI-generated elements with live footage without sacrificing the artistic voice. For example, you can create assets based on real photos, then carefully animate and post-process them in After Effects to assemble stunning shots.

Another approach is to add motion to still images and combine the resulting clips with live-action footage in post-production. Let’s take a closer look at Drew’s workflow here.

Make a detailed plan before shooting

So, for this demonstration, Drew went to a pool shoot on a sunny day with his models and assistants. For the most part, it was just like any other professional photography or videography job: operating high-resolution cameras (in this case, a Sony a7R V and a Sony a1 II), setting up the lighting, finding the best composition, directing the performance, and shooting from different angles.

However, they knew from the beginning that this would be a hybrid video production, so they also planned ahead:

  • Some frames were captured as still images, others as video clips. This was done with the flexibility aspects discussed below in mind.
  • Michaela, Drew’s model, wanted to gradually transform into a mermaid at the end of the video, so they looked for the best perspective to visually support the effect. It turned out to be an overhead shot taken from a drone.
Image credit: Directing the Future Course / MZed / Drew Geraci

After shooting, Drew Geraci went through the usual selection process and edited the photos in Adobe Lightroom to give them the soft, almost divine look he was going for.

Tips for converting still images into clips using Google Flow

At this point, Drew already had a collection of beautiful still images to offer his clients. Beyond that, he decided to use some of them in the video edit. To do so, he needed to give each image a bit of movement so it would blend seamlessly with the rest of the footage. This is where AI video generation comes into play.

Drew Geraci uses Google Flow in his course demonstrations. It’s a platform for creatives with an easy-to-understand interface, powered by Google’s latest generative models, including Veo 3. (Please note that a free trial is available, but a subscription is required to generate more.) Like most generative tools, Google Flow works with text prompts. However, it also includes a feature called “Frames-to-Video.”

Image credit: Directing the Future Course / MZed / Drew Geraci

Using Frames-to-Video, Drew uploads one of his still images and writes a text description of the desired motion. For the foot close-up above, the prompt was “Static shot. A woman’s feet walk fully forward into the pool with both feet, one after the other in a very delicate manner, with a cinematic look.” As you can see, this is quite an elaborate text with many precise details. The reason is that it usually takes several iterations to achieve the desired result. After the first try, you will probably get a clip where something is off, and that will guide you in adjusting the prompt and deciding what to specify. Generally, in Drew’s experience, the more specific the instructions, the better the results.

So what were the results? Google Flow took Drew’s photo, maintained the image composition, lighting, color, and the model’s feet, and inferred the appropriate movement from the text prompt. It didn’t invent anything and kept the creative vision intact.

Simple VFX transitions

Another tip for using Google Flow shared by Drew Geraci is to try different AI models for your shots. You can enable older versions of Veo in the drop-down menu, as shown in the screenshot.

Image credit: Directing the Future Course / MZed / Drew Geraci

Why? Some might say that newer means more advanced. However, Drew found that Veo 3’s results sometimes looked too plastic, as if the model were trying to smooth out image detail. This neural network can also deviate a bit from the established color grading look. So he used Veo 2 for his mermaid makeover idea.

AI-generated still images of video scenes based on real shots. Image credit: Directing the Future Course / MZed / Drew Geraci

It took several iterations to make the legs look like a mermaid tail. But it wasn’t a difficult process, because Drew had planned the effect in advance and chosen the right angle.

Post-production

Before continuing, Drew uses Topaz Labs software to upscale the AI generations to 4K. Once all the shots are complete, he combines them on one timeline in DaVinci Resolve and edits them into a single video. One tip for working with Google Flow results is to experiment with speed. For example, the underwater sequences tended to come out in slow motion. But if you increase the speed to, say, 150%, they suddenly start to feel more realistic.
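To make the retiming intuition concrete: playing a clip at a higher constant speed shortens it proportionally. Here is a minimal Python sketch (not part of Drew’s Resolve workflow; the helper name is our own) that converts the speed percentage you would type into an NLE’s clip-speed field into the resulting playback duration:

```python
def retimed_duration(original_s: float, speed_pct: float) -> float:
    """Return how long a clip plays after a constant speed change.

    speed_pct is the playback speed in percent, as entered in an
    editor's clip-speed field (150.0 means 150% speed).
    """
    if speed_pct <= 0:
        raise ValueError("speed must be positive")
    return original_s * 100.0 / speed_pct

# An 8-second generated clip retimed to 150% plays for about 5.33 s.
print(round(retimed_duration(8.0, 150.0), 2))  # → 5.33
```

Outside an NLE, the same 150% speed-up corresponds to FFmpeg’s `setpts=PTS/1.5` video filter, since presentation timestamps are divided by the speed factor.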

Final editing in progress. Stills from the course, image source: Drew Geraci / MZed

The final clips demonstrated in the course incorporate both AI-enhanced footage and live-action shots, yet look consistent. That’s because they all share the same talent, style, lighting, color palette, and vision. As Drew points out, the result can work as a short, fun social media clip. Alternatively, if an experienced artist wants to craft a more elaborate VFX transition to the mermaid by hand, this clip could serve as a great stylized previs for a pitch.

That’s Drew’s take on hybrid video production and how it streamlines parts of the workflow while preserving the creator’s voice and vision. While much of what is said about AI video generators is negative (for example, they are used in fraudulent and otherwise harmful ways), this could be a different, more ethical and sustainable approach. Drew is currently working on additional lessons, so stay tuned!

What else can I learn with MZed Pro?

As an MZed Pro member, you get access to hundreds of hours of filmmaking education. Additionally, we are continually adding new courses (some are currently in production).

Starting at just $29/month (billed at $349 for the first year, $199 for the second year, or $49/month), you get:

  • Over 60 courses, over 800 quality lessons.
  • Carefully crafted courses by educators with decades of experience and awards, including Pulitzer Prizes and Academy Awards.
  • Unlimited streaming access to all content for 12 months.
  • Offline downloads for viewing on the MZed iOS app.
  • Discounts on ARRI Academy online courses, exclusive to MZed.
  • Most of our courses provide industry-recognized certificates upon completion.
  • Purchased outright, the courses would cost over $9,500.
  • Course topics include cinematography, directing, lighting, cameras and lenses, producing, independent filmmaking, writing, editing, color grading, audio, time-lapse, pitch decks, and more.
  • If you decide it’s not for you, there’s a 7-day money-back guarantee (annual billing only).

Full disclosure: MZed is owned by CineD.
Join MZed Pro now and start watching today!

What do you think about this approach to AI video generators? Would you consider hybrid video production for your projects? How else could you see it fitting into your actual workflow? Share your ideas in the comments below!

Image credit: Directing the Future Course / MZed / Drew Geraci.




