As you know, NAB is not only a great place for the industry to meet, but also a great opportunity for companies to showcase something special. Adobe is no exception. After announcing new Premiere Pro features such as text-based editing and automatic color tone mapping, the company also revealed plans to bring Adobe Firefly to video, significantly expanding its family of creative generative AI models. What can we expect, and why does it sound exciting yet unsettling? Read everything we know about it below.
First things first: Adobe Firefly is an AI-driven set of tools built on Adobe Sensei, the company’s artificial intelligence and machine learning framework. Firefly was introduced to the world last month and is currently in beta testing. At the moment, its main use is creating images from text prompts (the range of such generative AI tools is already large; I wrote about Midjourney and Stable Diffusion here as examples). In addition, Firefly can generate text effects, which makes it an interesting toy for designers and illustrators.
At NAB 2023, the company revealed that it plans to bring generative AI to Adobe’s video, audio, animation, and motion graphics design apps. Some of the features already presented include generating music for clips from simple text descriptions and changing the color scheme of a video with a single click. Impressive, right?
Adobe Firefly for Video: What to Expect
The concepts Adobe is exploring in the AI space range from storyboarding to final editing. Let’s start with pre-production. As stated in the official press release, creators can upload their scripts and let the deep learning engine analyze the text and automatically create scribbles for the storyboard. In the video presentation, this process takes literally one mouse click and looks like the image below.
Amazing, isn’t it? That’s not all. Click the “Create Previs” button once more and voilà: you get a simple yet detailed animated pre-visualization of your movie in the timeline.
If Firefly’s functionality really is this simple and user-friendly, I think it will be very useful for aspiring filmmakers and independent production companies. At the same time, this direction of development bothers me a bit. There has already been much discussion about the potential impact of generative AI on the future of human artists. Some creators believe that big companies might use such tools instead of hiring experts. One example is how Netflix used AI to create art for its animated shorts, justifying it with an “industry labor shortage.”
Useful features to speed up various workflows
This ongoing debate aside, Adobe Firefly can be very useful for speeding up certain workflows, especially in social media and corporate video production (where turnaround time matters most and not every process needs to be done in full cinematic quality).
As an example, consider the text-to-color grading feature mentioned above. It allows creators to change the color scheme, time of day, and even the season of their video clips almost instantly. To do so, they simply type a short prompt, as in the example below.
Yet another AI-based tool that was announced not only transcribes interviews, but also identifies keywords, suggests relevant B-roll clips, and assembles rough cuts.
Once Firefly is integrated into other Adobe applications, it will not only generate text effects (as it does in the current beta) but also animate them. Creating stunning fonts, title cards, graphics, and logos is the next step, the developers promise.
Adobe Firefly for Audio
Of course, another area ripe for generative AI is audio. Here, Adobe also introduced some new concepts for Firefly. First, the presentation shows how creators can ask the artificial intelligence to create custom music tracks from just a few keywords.
You can then use the advanced sound effects feature to let the AI analyze the content of your video and look for matching sounds. As the developers emphasize, all generated melodies and tracks are royalty-free, so creators can add them to the final cut.
The biggest attraction of Adobe Firefly
Here is the biggest difference between Adobe Firefly and other generative AI models (at least in my opinion). The company is committed to training its models only on legally safe datasets: Adobe Stock images, openly licensed content, and public domain material whose copyright has expired. This is quite a new approach, and arguably the most conscientious one compared to other neural network developers.
This means that content generated by Firefly should be safe for commercial use, and it also addresses some of the attribution issues. For example, filmmakers are asking Adobe to give Stock creators the freedom to choose whether or not their footage is used to train the Firefly model (and, if it is, to get paid for it).
How to try out these new AI tools
Adobe Firefly is currently available in beta, and anyone can request early access here. We have already received our invitation and hope to show you the results of our first tests soon. But keep in mind that, for now, you can only try out the Firefly text-to-image generator and play around with creating different text effects. Adobe plans to introduce all the other AI-based features described in this article later this year. Stay tuned!
Conclusion
The more I hear about such announcements, the more it seems that video production has entered a whole new era. With all of these artificial intelligence tools (not just generative AI, but also others that have already taken over mundane tasks), pre- and post-production processes may become much faster than we previously thought. At the same time, this opens up entirely different questions that have yet to be answered.
And what do you think? Can you imagine incorporating the new Adobe Firefly features into your regular workflow? Which one are you most excited about? Let us know in the comments below!
Feature image credit: Adobe.