Why creators are moving their AI workflows to PC
AI image and video generation is evolving rapidly. Today you can produce photorealistic images and long, consistent video clips that follow your creative direction remarkably well. Many creators no longer rely solely on cloud tools and web generators; instead, they run the AI models locally on their own PCs.
Running locally has three major benefits:
- Complete control of your assets: your files never have to leave your machine.
- No usage or token limits: you can iterate as often as you like without worrying about credits.
- Faster iteration: a powerful GPU reduces latency so you can adjust prompts and settings in real time.
NVIDIA RTX PCs have become the primary choice for this type of workflow. RTX GPUs offer powerful AI acceleration, high-VRAM options, and optimizations that let large models run more smoothly. With the new open-weight models and RTX optimizations announced at CES 2026, creators can now generate higher-quality images and videos faster, especially on GeForce RTX 40 Series and the new RTX 50 Series cards.
Introduction to ComfyUI and popular models
One of the most popular open-source tools for visual generative AI is ComfyUI. It uses a node-based interface, so you can visually build and customize AI pipelines without extensive coding, which makes it a good fit for beginners and power users alike.
To get up and running on a Windows RTX PC:
- Download and install ComfyUI from Comfy.org.
- Start ComfyUI and open the Templates menu.
- Select the Getting Started section and select the Starter Text to Image template.
- Connect the model node to the Save Image node so that the pipeline can output the file.
- Press Run and watch the nodes light up as the GPU generates its first image.
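Under the hood, each template is simply a node graph, which ComfyUI can also express in an API-friendly JSON format. As a rough illustration of that shape, the sketch below builds a minimal text-to-image graph in Python and checks that every link points at a real node. The node IDs, checkpoint filename, prompt, and parameter values are placeholders, though the class names match ComfyUI's built-in nodes for a basic pipeline.

```python
# Sketch of ComfyUI's API-format workflow graph: nodes keyed by ID, each with
# a class_type and inputs; [node_id, output_index] pairs wire nodes together.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "model.safetensors"}},  # placeholder file
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["1", 1], "text": "vintage race car in the rain"}},
    "3": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "4": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["2", 0],
                     "latent_image": ["3", 0], "seed": 42, "steps": 20,
                     "cfg": 5.0, "sampler_name": "euler",
                     "scheduler": "normal", "denoise": 1.0}},
    "5": {"class_type": "VAEDecode",
          "inputs": {"samples": ["4", 0], "vae": ["1", 2]}},
    "6": {"class_type": "SaveImage",
          "inputs": {"images": ["5", 0], "filename_prefix": "starter"}},
}

def check_links(graph):
    """Verify every [node_id, output_index] link points at an existing node."""
    for node in graph.values():
        for value in node["inputs"].values():
            if isinstance(value, list):
                assert value[0] in graph, f"dangling link to node {value[0]}"
    return True

print(check_links(workflow))  # True when the graph is fully connected
```

This mirrors the connect-the-nodes step above: if the model node's output never reaches the Save Image node, the pipeline has a dangling link and produces no file.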
Once it’s working, you can adjust the text prompt and rerun the workflow to see how different descriptions change the results. From there, the next step is to explore more advanced templates built on newer models such as FLUX.2 for images and LTX 2 for video.
As you experiment with different models, you quickly run into one of the most important hardware considerations: GPU VRAM capacity. Every model has a memory footprint, and higher-resolution, more detailed generations require more VRAM. NVIDIA recommends the FP4 version of a model for GeForce RTX 50 Series GPUs and the FP8 version for RTX 40 Series cards. These formats reduce VRAM usage while maintaining high performance.
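A back-of-the-envelope calculation shows why these lower-precision formats matter. The sketch below estimates the VRAM needed just to hold a model's weights at different bit widths; the 12-billion-parameter figure is a hypothetical example, not an official size for any particular model, and real usage is higher once activations and framework overhead are added.

```python
def weight_vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Rough VRAM needed just to hold the weights (excludes activations,
    attention buffers, and framework overhead)."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Hypothetical 12B-parameter model at three precisions:
for name, bits in [("FP16", 16), ("FP8", 8), ("FP4", 4)]:
    print(f"{name}: ~{weight_vram_gb(12, bits):.1f} GB")
# FP16: ~22.4 GB, FP8: ~11.2 GB, FP4: ~5.6 GB
```

Each halving of the bit width halves the weight footprint, which is why FP4 lets a model fit on cards where the FP16 weights alone would overflow VRAM.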
Image generation with FLUX.2 Dev
FLUX.2 Dev is a powerful image generation model designed to work well with RTX hardware, and ComfyUI provides a dedicated template for it.
- Open Templates, click All Templates, and search for FLUX.2 Dev Text to Image.
- Select a template to load a complete workflow built from connected nodes.
The first time you run the workflow, you will be asked to download the model weights. These are the learned parameters that store the model’s knowledge, and they can be very large, often tens of gigabytes. ComfyUI handles the download and automatically saves the files in the appropriate folder (usually in safetensors format). Once the weights are in place, you can save the setup as your own workflow and reload it at any time.
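The safetensors format itself is simple: an 8-byte little-endian length, then a JSON header describing each tensor, then the raw data. As a rough illustration, the sketch below writes a tiny stand-in file and reads its header back; a real checkpoint is tens of gigabytes, but the layout is the same.

```python
import json
import os
import struct
import tempfile

def read_safetensors_header(path):
    """Read just the JSON header of a .safetensors file: the first 8 bytes
    are a little-endian u64 header length, followed by that many bytes of
    JSON describing each tensor's dtype, shape, and byte offsets."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Build a tiny stand-in file to demonstrate (one fake FP16 2x2 tensor).
header = {"weight": {"dtype": "F16", "shape": [2, 2], "data_offsets": [0, 8]}}
blob = json.dumps(header).encode()
with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00" * 8)
    path = f.name

info = read_safetensors_header(path)
print(info["weight"]["shape"])  # [2, 2]
os.unlink(path)
```

Reading only the header is handy in practice: you can inspect a multi-gigabyte checkpoint's tensor names and dtypes without loading any weight data.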
Proper prompting is essential to getting great results with FLUX.2 Dev. Some simple guidelines:
- Clearly describe the theme, setting, style, and atmosphere in one or two sentences. For example, “Cinematic close-up of a vintage race car in the rain, neon reflections on wet asphalt, high contrast, 35mm photography.”
- Add constraints that are important to your project, such as frames, level of detail, and realism.
- If your image looks cluttered or overcrowded, remove extra adjectives instead of piling them on.
- Focus on describing what you want rather than overloading the negative prompt.
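These guidelines can be captured in a small helper that assembles a prompt from its parts: one clear subject-and-setting description plus only the constraints that matter. The function and its field names are purely illustrative; there is no FLUX.2-specific prompt API.

```python
def build_prompt(subject, setting="", style="", constraints=()):
    """Assemble a concise image prompt from a few labeled parts.
    Empty parts are skipped, which discourages adjective pile-ups."""
    parts = [p for p in (subject, setting, style, *constraints) if p]
    return ", ".join(parts)

prompt = build_prompt(
    "Cinematic close-up of a vintage race car in the rain",
    setting="neon reflections on wet asphalt",
    style="35mm photography",
    constraints=("high contrast",),
)
print(prompt)
```

If a result looks cluttered, trimming the `constraints` tuple is the programmatic equivalent of removing extra adjectives.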
Once you are satisfied with an image, you can right-click the Save Image node in ComfyUI to open the file in your browser or locate it on disk. On Windows, ComfyUI typically saves output within its installation folder or under the user’s AppData directory. On Linux, output typically lives in the ComfyUI config folder in your home directory.
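If you would rather grab the newest render from a script than hunt through folders, a small helper can do it. Since the output location varies by install, the sketch below takes the path as an argument; the demo runs against a throwaway folder rather than a real ComfyUI directory.

```python
import os
import tempfile
from pathlib import Path

def latest_output(output_dir):
    """Return the most recently modified PNG under an output folder,
    or None if the folder has no PNGs yet."""
    images = sorted(Path(output_dir).glob("**/*.png"),
                    key=lambda p: p.stat().st_mtime)
    return images[-1] if images else None

# Demo against a throwaway folder; substitute your ComfyUI output path.
with tempfile.TemporaryDirectory() as d:
    for name, mtime in [("a.png", 100), ("b.png", 200)]:
        p = Path(d, name)
        p.touch()
        os.utime(p, (mtime, mtime))  # force distinct modification times
    newest = latest_output(d).name

print(newest)  # b.png
```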
Video generation with LTX 2 and advanced workflows
For video, Lightricks LTX 2 is an advanced model that can turn images and text prompts into controllable clips. ComfyUI has a dedicated LTX 2 Image to Video template. Once you have downloaded the LTX 2 model weights, you can feed it an image, add a text description, and generate short video shots that match your creative direction.
The best LTX 2 prompts read like precise shot descriptions rather than full movie scripts. Aim for a few short paragraphs or script-style lines that cover:
- Shot type and scene settings (wide, medium, close-up, lighting, color, mood, etc.).
- Action, the characters and their visible features, and camera movement.
- Audio details such as atmosphere, music, and dialogue with quotes.
You can further refine this by specifying camera movement, pacing and timing, atmospheric effects such as fog or rain, stylistic cues such as film noir or painterly looks, and emotional expressions for your characters. Since LTX 2 is a large frontier model, it can require substantial VRAM, especially at high resolutions, frame rates, or sequence lengths. To help with this, NVIDIA and ComfyUI support a weight streaming feature that offloads parts of the model to system memory when GPU VRAM runs out, letting larger jobs run on smaller GPUs at some cost in speed.
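Rough arithmetic shows why resolution and clip length dominate video VRAM use. The sketch below estimates the size of a single video latent tensor; the compression factors and channel count are generic placeholder values for a video diffusion model, not LTX 2's actual architecture numbers.

```python
def video_latent_mb(width, height, frames, channels=16,
                    spatial_down=8, temporal_down=4, bytes_per=2):
    """Back-of-the-envelope size of one video diffusion latent tensor.
    All compression factors are illustrative placeholder values."""
    elems = ((width // spatial_down) * (height // spatial_down)
             * (frames // temporal_down + 1) * channels)
    return elems * bytes_per / 2**20

# Doubling the resolution quadruples the latent alone:
print(round(video_latent_mb(1280, 720, 121), 1))   # 13.6
print(round(video_latent_mb(2560, 1440, 121), 1))  # 54.5
```

The latent is only one of several buffers held alongside the weights, which is why long, high-resolution clips are where weight streaming earns its keep.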
A powerful trick is to combine FLUX.2 Dev and LTX 2 into a single custom workflow. You can generate images in FLUX.2 Dev and connect its output nodes directly to the LTX 2 Image to Video workflow within ComfyUI. This allows you to enter one prompt for still frames and another for motion, rendering everything from a single pipeline.
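If you want to drive such a combined pipeline from a script, ComfyUI's local HTTP API can queue a workflow that has been exported in API format. The sketch below only builds the request so it can be inspected without a running server; the `/prompt` endpoint and default port 8188 match ComfyUI's standard setup, while the one-node graph is a stand-in for your exported FLUX.2 + LTX 2 workflow.

```python
import json
import urllib.request

def build_request(workflow: dict, host: str = "127.0.0.1", port: int = 8188):
    """Build the HTTP request that queues an API-format workflow on a
    local ComfyUI server (POST /prompt, default port 8188)."""
    payload = json.dumps({"prompt": workflow}).encode()
    return urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# A real graph would come from exporting your combined workflow in
# ComfyUI's API format; a one-node stand-in is used here.
req = build_request({"9": {"class_type": "SaveImage", "inputs": {}}})
print(req.full_url)  # http://127.0.0.1:8188/prompt
# urllib.request.urlopen(req) would submit it once ComfyUI is running.
```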
Beyond 2D images and video, NVIDIA is also working on 3D guided generative AI with open blueprints that show how 3D scenes and assets can be used to drive controllable production pipelines on RTX PCs. These resources, along with active communities on Reddit and Discord, give creators a detailed toolbox to experiment, share their work, and receive support.
Recent CES 2026 announcements highlight 4K AI video acceleration on PCs with LTX 2, new RTX optimizations across ComfyUI and other tools, and faster FLUX.2 variants using NVFP4 and NVFP8 formats for up to 2.5x speedups across a wide range of RTX GPUs. Combined with tools like Project G-Assist, which can also monitor and manage PC component settings, RTX AI PCs are quickly becoming a compelling creative platform for anyone interested in cutting-edge local AI generation.
Original article and image: https://blogs.nvidia.com/blog/rtx-ai-garage-comfyui-tutorial/
