The launch of Scope and support for SDXL expands Daydream’s open infrastructure to connect creators, developers, and researchers building and experimenting with real-time video generation.


NEW YORK–(BUSINESS WIRE)–Daydream, the open source community hub for real-time AI video and world modeling, today announced two major milestones: the release of Daydream Scope, an open source development environment for real-time AI workflows, and support for StreamDiffusion SDXL in the Daydream API and Playground web experience, bringing high-fidelity, low-latency video generation to creators and developers everywhere. These releases mark Daydream's evolution into a hub for real-time video generation stacks, connecting models, creators, and infrastructure in one open ecosystem and bringing consistency to a landscape fragmented across tools, models, and communities.
Scope: A new era of real-time AI video development
Daydream Scope is an open source toolkit that allows developers to build, test, and visualize real-time video and world model workflows locally. It provides a modular interface and supports seamless integration of models for real-time inference, control, and remixing.
“Scope represents the foundational layer of the next generation world model,” said Eric Tang, co-founder of Livepeer, Inc., Daydream’s parent company. “This gives creative technologists and builders a scalable workspace to experiment with real-time AI pipelines, including generative video, virtual production, academic research, and more.”
Scope is currently in community alpha. It already supports LongLive, StreamDiffusionV2, and Krea Realtime 14B, with new models added weekly.
SDXL: A major advance in real-time quality and control
The SDXL release is built on StreamDiffusion’s open architecture, allowing Daydream to merge multiple research tracks into one cohesive, production-ready stack. In addition to today’s SDXL release, key components of the Daydream platform include:
- Image-based style controls (IPAdapters): Enables dynamic, image-driven style transfer in two main modes:
  - IPAdapter Standard, for controlling artistic style.
  - IPAdapter FaceID, for consistent character rendering from frame to frame.
- Multi-ControlNet support: Accelerated HED, Depth, Pose, Tile, and Canny ControlNets provide fine-grained spatial and temporal precision, allowing users to fine-tune multiple parameters in real time.
- TensorRT acceleration: Optimized NVIDIA inference ensures smooth playback and consistent performance at 15-25 FPS, even with complex model configurations.
For creators who prefer SD1.5, Daydream combines that model with an accelerated IPAdapter to provide high frame rate style transfer and enhanced ease of use.
Already, creative technologists like DotSimulate, creator of the popular TouchDesigner component StreamDiffusionTD, are incorporating Daydream's SDXL release into their applications. Other developers are building SDXL-based tools using the Daydream API or by self-hosting Daydream's open source StreamDiffusion fork.
About Daydream
Daydream, a product of Livepeer, Inc., is a community hub for open source real-time AI video and world model technology. Daydream provides the infrastructure, research, and tools for developers, researchers, and creative technologists to build, deploy, and share the next generation of interactive AI systems. For more information, please visit https://daydream.live.
Contacts
Eric Tang
Co-founder of Livepeer, Inc., Daydream's parent company
[email protected]
