Adobe Firefly can now generate AI sound effects for videos – and I was seriously impressed



Adobe Firefly Models can now generate sound effects for video.

Adobe / Elyse Betters Picaro / ZDNET

Just a year and a half ago, the latest and best of Adobe's Firefly generative AI products involved creating high-quality images from text, with customization options such as reference images. Since then, Adobe has pivoted to text-to-video generation, and it has now added a number of features to make its offering even more competitive.

Also: Forget Sora: Adobe launches a “commercially safe” AI video generator. How to try it

On Thursday, Adobe released a series of upgrades to its video features, offering more control over the final generation, more options for creating videos, and even more modalities. Creating realistic AI-generated videos is an impressive feat that shows how far AI generation has come, but video generation has long lacked one important element: sound.

With this release, Adobe aims to give creative professionals the ability to use AI to create audio as well.

Generate Sound Effects

The new Generate Sound Effects (beta) feature lets users create custom sounds by entering text descriptions of the audio they want. Users who want more control over the output can also record their own voice to indicate the rhythm, timing, and intensity that the generated sound should follow.

For example, if you want to produce a lion's roar that matches the moments when the animal's mouth opens and closes, you can watch the video, record a clip of yourself making noise in time with the character's movements, and include a text prompt describing the sound you want to create. You are then given multiple options to choose from, so you can pick the one that best fits the atmosphere of the project.

Also: Adobe Firefly now generates AI images using OpenAI, Google, and Flux models – how to access them

Other video generation models, such as Veo 3, can generate videos with audio from text prompts, but what really stood out about this feature is the amount of control users have when supplying their own audio.

Ahead of the launch, I had the opportunity to watch a live demo of the feature while it was still in development. The generated audio matched the flow of the input audio remarkably well, and it was impressive to see how the model incorporated the text prompt to produce a sound that actually resembled the intended output.

Generate visual avatars

Another feature launching in beta is Text to Avatar. As the name suggests, it lets users turn scripts into avatar-led videos, in which a lifelike presenter reads the script. Users can browse the avatar library to select an avatar, choose a custom background and accent, and then have Firefly create the final output.

[Image: Text to Avatar interface. Credit: Adobe]

Adobe shares that potential use cases for this feature include creating engaging video lessons with virtual presenters, converting text content into social media videos, and adding a "human touch" to otherwise static material.

Other video improvements

Adobe has also announced practical, straightforward features that improve the video generation experience. For example, with Composition Reference, users can upload a reference video and apply its composition to a new generation.

Also: Why Adobe Firefly may actually be the only AI image tool that matters

This is a big win for creators who rely on generated video, because no matter how well a prompt is written, a text description can capture only part of the visuals you imagine. A reference cuts down the time spent explaining and helps the model understand the goal. In the live demo, the final output looked very similar to the reference.

Additionally, the new Style Presets option lets users customize their videos more easily by applying a visual style with a single tap of a preset. These styles include claymation, anime, line art, vector art, black and white, and more.

[Image: Adobe Firefly Style Presets. Credit: Adobe]

The new Enhance Prompt feature in Firefly's web-based Generate Video module helps users get the results they want by expanding the original prompt with additional language, allowing Firefly to better understand their intent.

Also: Chatbot SEO: Adobe aims to help brands get attention in the age of AI

Adobe has also added a keyframe cropping feature. Users can upload the first and last frames, specify how the images should be cropped, add a description of the scene, and then generate a video that fits between them.

[Image: Keyframe cropping. Credit: Adobe]

Finally, Adobe has updated its Firefly video model with improved motion fidelity. Generated videos now move more smoothly and naturally, and better mimic real-world physics, which is especially important when generating videos of animals, people, and nature.

Also: Adobe's Photoshop AI Editing Magic is finally coming to Android – and it's free

Adobe is also gradually adding partner models to its video generation tools, giving users the opportunity to try different styles from across the market in one place. Adobe is currently adding Topaz's image and video upscalers and Moonvalley's Marey to Firefly Boards, along with Luma AI's Ray2 and Pika 2.2 for video generation.
