Adobe Firefly adds new AI video editing tools, partner models, and browser-based editor


Adobe has expanded its Firefly creative AI platform with new video editing features, third-party model integration, and a lightweight, browser-based video editor designed to move beyond “prompt roulette” to true creative control. This update introduces Runway's Aleph model for precise clip editing, Topaz Astra for resolution upscaling, and Black Forest Labs' FLUX.2 for enhanced image generation.

For filmmakers and content creators who have followed the steady development of Adobe's AI-powered tools, this release marks another step in the company's strategy to position Firefly as an integrated creative hub rather than just a text-to-video generator. It addresses two persistent criticisms of generative video tools: the lack of precise editing control and the need to jump between multiple platforms to achieve professional results. Adobe's approach lets users choose the right model for each task while keeping everything within a single interface.

At CineD, we've covered Adobe's Firefly video features extensively, from the Firefly Video Model's initial announcement to the launch of its public beta earlier this year. The platform has evolved significantly since its early days, and this latest update continues that trajectory by focusing on workflow improvements rather than just raw generation capabilities.

Edit Prompt with Runway's Aleph model

The most important addition for video professionals is the new Edit Prompt feature, which integrates Runway's Aleph model directly into Firefly. Rather than regenerating an entire clip from scratch whenever adjustments are needed, creators can now upload existing footage and use natural-language instructions to make targeted edits. The workflow is simple: generate videos in Firefly, then adjust them with specific editing commands powered by Aleph.

Runway introduced Aleph in July 2025 as an in-context video model built for multi-task visual generation. The model can add, remove, and transform objects in a clip, generate new camera angles, change lighting conditions, apply style transfers, and handle a range of post-production tasks that traditionally required manual compositing. According to Runway's documentation, Aleph analyzes uploaded footage to understand scene context and maintain temporal consistency between frames before implementing changes.

For Adobe, this partnership represents a move away from relying solely on an in-house model. By integrating Aleph, Firefly gains advanced video manipulation capabilities that complement existing generation tools. This combination allows creators to generate initial content using Adobe's commercially safe Firefly Video Model and then apply more targeted edits through Aleph without leaving the platform.

New camera motion controls

Adobe also introduced enhanced camera motion controls that give filmmakers more say in how the virtual camera moves within a generated scene. The workflow uses a reference-based approach: upload a starting-frame image along with a video clip showing the camera movement you want to recreate, and Firefly applies that motion pattern to new content.

This solves one of the fundamental challenges with AI video generation, where camera movements often feel random or disconnected from cinematic conventions. By using real footage as a motion reference, creators can pull from their own shot library or existing productions to guide the AI's output. It's not exactly the same as manual keyframing, but it's much more predictable than text prompts alone.

Topaz Astra integration for video upscaling

Resolution limitations have been a consistent pain point for AI-generated videos. Most tools output at 1080p or lower, limiting their usefulness in professional pipelines where 4K or higher delivery is the norm. Adobe is addressing this issue through integration with Topaz Labs' cloud-based video upscaler, Topaz Astra.

Topaz Astra video upscaler now integrates directly with Adobe Firefly. Image credit: Adobe

Astra uses its Starlight AI model to upscale footage to 1080p or 4K while increasing detail and clarity. Unlike simple resolution scaling, the tool offers two modes: Precise mode cleans up artifacts while preserving the original look, while Creative mode "imagines" new details in your footage. For AI-generated content in particular, Creative mode can help smooth out the obvious inconsistencies that often appear in generated clips.

This integration allows Firefly users to generate clips at the platform's native resolution and push them to broadcast-ready quality without exporting to another application. For creators working with old or low-quality source material, Astra can also help restore clarity and reduce noise, essentially acting as both an upscaler and restoration tool.

Black Forest Labs' FLUX.2 joins Firefly

On the image generation side, Adobe is adding Black Forest Labs' FLUX.2 to Firefly's growing list of partner models. FLUX.2 is the latest version of the FLUX model family, with significant improvements in image quality, prompt adherence, and multi-reference consistency.

Within Firefly, FLUX.2 generates and edits images at up to 1-megapixel resolution with flexible aspect ratios. The model offers what Black Forest Labs describes as photorealistic detail with advanced text rendering, an area where many AI image generators still struggle. Notably, FLUX.2 can work with up to four reference images simultaneously, making it possible to maintain character and style consistency across multiple generations.

Black Forest Labs' FLUX.2 is available for Adobe Firefly. Image credit: Black Forest Labs

This model is available in Firefly's Text to Image module, Edit Prompt feature, and Firefly Boards. Adobe is also bringing FLUX.2 to Photoshop desktop immediately, with Adobe Express integration scheduled for January. This cross-platform availability reflects Adobe's broader strategy to make powerful AI models accessible wherever creative work happens.

Firefly video editor enters public beta

In addition to new models and features, Adobe is releasing a dedicated Firefly video editor in public beta. Described as a lightweight, browser-based assembly space, the editor allows creators to combine AI-generated clips with personal footage and audio in a streamlined interface.

This tool is not intended to replace a full-featured NLE like Premiere Pro. Instead, it serves as a finishing environment for AI-generated content, allowing creators to assemble generated clips into a coherent sequence without switching to heavier software. Think of it as the final step in an AI-first workflow: generate footage, apply edits and upscaling, then cut everything into a simple timeline before exporting or handing off to more advanced editing tools.

The Firefly video editor is not a replacement for Premiere Pro; it's the first assembly step after generation. Image credit: Adobe

Unlimited generations until mid-January

As a promotion, Adobe is offering Firefly plan subscribers unlimited image and video generation until January 15th. The offer covers image generation with all Firefly and partner models, as well as unlimited video generation with the Firefly Video Model. For creators who normally have to budget their generative credits, this window is an opportunity to experiment with the platform's expanded features more freely.

Toward a complete professional workflow

Adobe has structured this update around a key insight: creators don't want to be tied to a single AI model or forced to jump between different tools to complete their projects. Adobe's own commercially safe models, combined with industry-leading partner tools from Runway, Topaz, and Black Forest Labs, create what the company calls a practical end-to-end creative workflow.

This approach is particularly relevant for professional users who need both creative flexibility and legal clarity. Adobe has consistently emphasized that Firefly models are trained on licensed content and that generated assets are safe for commercial use, without the copyright ambiguity surrounding many competing tools. By hosting partner models within the same environment, Adobe preserves that workflow integration while giving creators access to specialized capabilities as needed.

It remains to be seen whether this multi-model strategy will satisfy professional cinematographers. These tools are great on paper, but real production demands a level of precision and predictability that generative AI still doesn't consistently deliver. Still, for pre-visualization, concept development, and certain types of content creation, Firefly's extended toolkit can be a truly useful resource.

What do you think about Adobe's approach to integrating third-party AI models into Firefly? Are you planning to test the new edit prompt feature with Runway's Aleph model? Feel free to let us know in the comments below.




