This could be where AI video takes its next step. While it isn't a new generative AI video model to rival Seedance 2.0 or its peers, this new technology from Adobe aims to give creators a host of new tools for better controlling their AI-generated videos.
Called 'MotionStream', this experimental technology announced by Adobe will allow creators to interact with AI-generated videos as they are generated, giving them the ability to 'direct' the movement of objects and make intuitive adjustments like changing camera angles.
All of this is set to happen in real-time using cursors and sliders, potentially taking AI video technology to the next level. If your interest (or existential crisis) has been piqued, here’s what you need to know.
Adobe introduces MotionStream

Credit: Adobe
Developed by researchers at Adobe, who have published their research on MotionStream and are now offering a public preview, this new technology isn't all that dissimilar to the tools and controls we've seen in other generative AI models and platforms, such as Runway's Director Mode.
What sets it apart may simply be its performance and intuitiveness. Adobe promises it will be a "big change in how people control video in the future," according to Eli Shechtman, a Senior Principal Scientist at Adobe and one of MotionStream's researchers.
According to Adobe, the MotionStream experience incorporates natural movement while remaining fast and controllable. Most current-generation AI tools require users to enter a text prompt, click, and then wait tens of seconds or even a minute for the tool to create or edit a video clip, which means a lot of sitting around, waiting, and frustration.
Adobe MotionStream experience

Credit: Adobe
The company hopes this new technology will allow users to interact with AI-generated videos as they are created, allowing them to “direct” the movement of objects and change camera angles in real-time with simple cursor and slider controls.
“This is where a lot of the magic happens. It’s the secondary effects that are very difficult to control manually. For example, if you want to move an elephant, you can click on its body to make it move, but making that movement look natural by hand is a pain. Today that requires skill and specialized software to rig the character and animate or keyframe the motion, a process that typically takes hours and can stretch to days depending on the scope. With MotionStream, the underlying video generator essentially simulates the real world, so the elephant’s legs move naturally and its ears flap naturally as it walks. The model brings knowledge about the world and lets you interact with it.” – Eli Shechtman, Senior Principal Scientist and one of the researchers behind MotionStream.
Overall, with lower latency and more control, this could be the future of AI video, making it a little more flexible and usable for filmmakers and video creatives. However, many of the same concerns surrounding generative AI are likely to remain.
If you’re curious (or morbidly so), you can find more information on Adobe’s Research page.