Interview with AI artist Paul Trillo about his new Sora-powered music video



Debuting on May 2, music artist Washed Out's latest single adds a little more to the silky chillwave beat. The music video for “Hardest Part” was created by filmmaker and multidisciplinary artist Paul Trillo, and it is the world's first officially commissioned music video powered by OpenAI's new, not-yet-publicly-available generative AI video tool, Sora.

Trillo, one of the few creators hand-picked by OpenAI to test Sora for its “First Impressions” project, may be a familiar name to those following the AI video revolution. We sat down with him to talk about his new music video, his prompt-based process, and his thoughts on the future of AI as a creative booster.


Editor's note: The following conversation has been edited for length and clarity.

No Film School: Paul, thank you for chatting with us. Before we get into the process behind Washed Out's new music video and this AI revolution we're all a part of, could you tell us a little about your background as a film director and artist?

Paul Trillo: Well, I never intended for it to be a revolution, but it turned out that way, and I think I ended up at the tip of the spear for a lot of things, for better or for worse. I think part of that comes from the fact that a lot of my work has always involved experimenting with technology, whether it's camera techniques or post-production techniques. It's never been bound by genre or format. We've done everything from Super Bowl comedy spots to dance films, music videos, and video art installations like “Memo to future self,” which is currently part of a museum show in Madrid. I've done things that are completely narrative and things that are completely abstract. So I've always tried to step out of my comfort zone, try things I've never tried before, and stay curious.

Well, that led from one project to another, some of which have been featured on No Film School. We did a 10-minute single-shot drone film. I made the first piece shot with a mobile bullet-time rig built from smartphones. And, yeah, it's always been about how techniques and technologies open up stories and visual concepts that have never been seen before. So I've always tried to lean into the technology side, discover new kinds of visuals, and push back against the idea that everything has been done before.

NFS: For readers who may have first heard your name as one of the creators who previewed OpenAI's Sora and shared their first impressions, could you tell us a little more about what you thought of Sora the first time you got a chance to experiment with it?

Paul Trillo: Well, I remember being a little overwhelmed at first and thinking, “Where do I start?” My immediate instinct was to try to break it, which is what I do with most things, whether it's post effects or cameras. I think OpenAI was very interested in learning from our processes, so they didn't tell us much about how to use the tool. At first, I noticed that it had an aesthetic similar to a video game, like 1990s 3D animation slash stock video.

I felt like I couldn't take aesthetic ownership of something like that. That's always a challenge with AI: trying to preserve your own voice and fingerprints in the process when you're inherently limited to what the model has been trained to do.

I wanted to see if we could get away from that video game look and make it feel more tangible. I also wanted it to be as dynamic as possible. And, frankly, I was getting a little tired of a lot of the AI work that had been published, which was essentially just flashy slideshows, like PowerPoint presentations disguised as short films, with very little camera or character movement.

Even when there is movement, it rarely holds together for more than a few frames. So my instinct was, okay, can we really make this chaotic? How fast can this camera move, and what kind of prompts can I use to achieve that? It was a complete guessing game. One of the first tests I did was a 15-second clip that wasn't edited at all, just to see if I could get some raw output using these whip pans, continuously zooming through different eras.

And when I did that, I thought, “Oh my god, this is so much more powerful than they're letting on.” Especially on the more experimental film side, which I'm more interested in: I saw some weird filmic effects happening and realized I could do a lot more with this as a tool. And from there, it became a constant search to see what else it could do.


NFS: Moving on to this new Washed Out music video, which is probably the world's first (or at least the first officially commissioned) music video made with Sora, could you tell us a little about how this project came about?

Paul Trillo: Yes, the timing was perfect. It was quite serendipitous that the opportunity with OpenAI lined up with the chance to use it on a music video. Ernest from Washed Out had originally contacted me at the end of January, and we were discussing various ideas. I'm always trying to do too many things at once, so I was starting to get a little anxious about when we'd be able to shoot. But when I received the go-ahead from my contact at OpenAI, everything fell into place.

So I saw this as an opportunity to do something crazy that wouldn't have been possible on a normal schedule and budget. And that fit well with something I'd noticed: Sora can blend environments in these surreal ways. So I decided to lean into this idea of summoning AI images, as if evoking some kind of false memory.

NFS: Can you talk about using Sora and what kind of render times you were dealing with? And how much prompting did it take to get what we ultimately see in the music video?

Paul Trillo: For this project, I think I generated almost 700 clips and used about 55 or 56 of them to create this video. So we calculated that less than 10% of what was generated actually made it into the final video.

In terms of time, a Sora generation can take anywhere from 15 minutes to an hour, depending on overall Sora usage, clip length, and resolution, so there's considerable variation in render times. But with 700 clips, you can imagine it adds up to more than a day of rendering; it's multiple days. I worked on this for probably about six weeks. I could have made a different video in that time, but I spent so much of it on the edit, many extra days, because writing a piece like this requires a fluid back and forth between the idea and the final creation of the work.
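Editor's note: as rough back-of-the-envelope math on our part, not Trillo's, 700 clips at 15 minutes each works out to 700 × 0.25 ≈ 175 hours of cumulative generation time, and at the one-hour end it's around 700 hours, which is why even a partially parallel render queue stretches across multiple days.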

The more room you have to explore an idea, the more you end up filling that time. Whatever time you save in one place, you spend somewhere else. Even with all the time-saving technology we have, all the features on our smartphones meant to save us time, we still feel really busy. In other words, technology doesn't give you that time back in your day; you just end up filling up the free time. And I think the same thing applies to the creative process.


NFS: Interesting. Since most of us haven't been able to try Sora for ourselves yet, I'm curious what the actual process is like when you first interact with the prompt and try to bring a project like this to life.

Paul Trillo: For me, it originally started with these rolling hills, the kind I felt would be difficult to find in real life. I wondered how Sora would handle a surreal landscape of rolling greenery, and I think it handled it better than any location we could have found. And while we were waiting to hear back from OpenAI to approve everything, I was running some initial tests and discovered that I could do this kind of fast infinite zoom and infinite dolly movement, something I'd done in my work before AI.

So that became kind of a go-to: taking techniques I'd done elsewhere with other tools and asking, can I do that here? And I thought that might be a great device to tell a story.

I'd actually had the idea of following a young couple over 40 years, zooming in and out through time, about ten years ago, but I could never make the budget work for a music video. So I shelved it, and then I thought, “Oh, this could be interesting.” This song is about letting go of someone, moving on, and knowing that you have to live your life without them. So I wanted the story to honor the lyrics and why the song was written.

From there, using Sora was honestly liberating, because I could throw in any idea I wanted to explore, even if it was a bad one. In the creative process, you often find yourself editing, compromising, or filtering out certain ideas. But with Sora, there's no judgment. It's like testing an idea to see if it works without having to pitch it to anyone.

NFS: It seems like one of the ways AI could most immediately help filmmakers is through experimentation.

Paul Trillo: Well, I think the ability to experiment and try things is what's unique about this tool, and it works best when it's used in its most experimental form.


NFS: Going into more detail, what advice would you give to filmmakers and artists looking to use Sora or AI in general in their projects?

Paul Trillo: Any advice I could offer on the technical aspects of using the tool is still in flux. But I think it's really great for trying out ideas that others won't let you do, so you don't have to chase a green light or a budget for something. You can check whether something works without getting anyone's approval. And that may give you the opportunity to try things you wouldn't normally explore.

But I think it also makes sense to know where to draw the line. You don't want to rely on AI 100% of the time or use it as a crutch, because AI can't do everything; it has its limitations, and it's still very strange. If your idea is experimental, or if using AI makes sense on a higher conceptual level, then it's legitimate.

But if you're struggling with character consistency, or you're thinking, “Oh, it can't do dialogue or anything,” go out and shoot it on camera. If you're hitting the wall of AI's limitations, it probably means you're not using AI the right way. It's great for experimenting with hallucinations and finding the happy accidents that arise from the glitches. I think those can be really beautiful and interesting, because they're things you can't capture with a real camera.

Finding beautiful errors like that is a great use of the tool, but I'm not excited to see it replace the entire filmmaking process. Honestly, I think that's a bit boring, and it's not getting the most out of what this tool can do.


