Adobe Firefly introduces a new suite of tools and partner models to streamline AI-driven video production and editing, giving content creators greater flexibility, accuracy, and creative control.
Generate new sound effects
The new Generate Sound Effects (Beta) feature allows users to create custom sound effects by entering a text prompt or recording their own audio. The tool is designed to make it easy for creators to add custom audio elements that suit the emotional and atmospheric requirements of a video. According to the company, Firefly analyzes the timing and energy of the user's voice input and matches it to the corresponding on-screen action, providing more cinematic alignment between audio and visuals.
“Sound is a powerful storytelling tool that adds emotion and depth to your video. Generate Sound Effects (Beta) makes it easy to create custom sounds, such as a lion's roar or ambient nature sounds. The tool listens to the energy and rhythm of your voice and places sound effects precisely where they belong.”
Expanded partner ecosystem
Firefly is expanding its partner ecosystem to include additional generative AI models. Users can now access Marey from Moonvalley, Veo 3 (with audio) from Google, and Gen-4 Video from Runway. This broadens the range of creative options for video styles and production without requiring users to move between different applications and workflows. Additional models from Topaz Labs and Luma AI will soon be available in Firefly Boards and Generate Video.
The company said, “Creatives enjoy experimenting with different styles, so we are continuously expanding the models offered within the Firefly app. Recently, we added Runway's Gen-4 Video and Google's Veo 3 to Generate Video and Firefly Boards. Luma AI's Ray2 and Pika 2.2 are already available on Boards and will be added to Generate Video soon.”
Enhanced video controls
Firefly has released advanced video controls that give users the ability to direct specific aspects of composition, pacing, and style frame by frame. The app now supports flexible aspect ratio choices – vertical, horizontal, or square – streamlining the creation of content in multiple formats, such as mobile, widescreen, and social.
Among the new tools is Composition Reference for video. It allows creators to upload a reference video along with a description; Firefly then generates new content that preserves the reference's visual structure. This is especially useful for repurposing content and maintaining consistency across scenes. The Style Presets tool lets users apply visual styles such as claymation, anime, and line art with a single click, making it easy to set the tone of a pitch, brief, or final piece.
Keyframe cropping provides an intuitive way to manage framing and transitions. Users set the first and last frames and the intended crop, and Firefly handles video generation to fit the format, making the whole process more efficient without interrupting the creative workflow.
Composition Reference, Style Presets, and keyframe cropping are built to deliver more control, more speed, and more creative freedom – and they are just the start, with many more enhancements underway to push storytelling even further.
Text to Avatar and prompt enhancement
Firefly also launched Text to Avatar (Beta), which generates an avatar-led video from a script in just a few clicks. The tool offers a library of avatars, customizable backgrounds, and a selection of accents to suit the desired tone or audience.
The company states, “With Text to Avatar (Beta), you can turn your script into an engaging avatar-led video with a few clicks. Choose from a diverse library of avatars, customize the background with colors, images, or videos, and choose the perfect accent for your video.” The tool can be used to deliver video lessons, repurpose written content for social media, and create internal training materials with virtual presenters.
Recognizing the challenges some users face in writing prompts, Generate Video's new Enhance Prompt feature takes user input and makes it clearer and more directive, reducing the time spent refining prompt language.
Commercial safety and creative rights
Adobe claims that all generative AI models within Firefly are trained only on assets for which it holds the appropriate permissions. The company emphasizes that its model training respects and protects creators' rights, and that user-generated content within Firefly is not used for training.
User guides and best-practice tutorials help users get started quickly and optimize their creative processes within the Firefly platform.
