OpenAI has published two new Sora videos on its YouTube channel, both produced by creative professionals and made entirely with video generated by the Sora model.
Sora was first announced earlier this year, and several months on it remains available only to a limited number of filmmakers and creative professionals — a situation OpenAI says is unlikely to change anytime soon.
One of the new videos evokes old black-and-white documentary footage in which animals take on unexpected roles, while the other is best described as a neon dreamscape of car washes, cloud-walking and glowing fur.
These clips show that even after the major upgrade to Runway Gen-3 and the release of the Luma Dream Machine, Sora still leads the current crop of models, though not by a large margin. There are already some strong alternatives to Sora.
Longer initial generations mean greater consistency, and the natural quality of the movement suggests Sora functions as something like a world model. Even so, other models are catching up quickly and may reach Sora's level before it is released to the public.
Neon Dreamscape
The first video was produced by Tammy Lovin, a digital artist who specializes in 3D and emerging technologies, not filmmaking. All clips in the Sora video were generated by Sora without any additional VFX.
“What I love most about Sora is that it feels like we're creating together,” she said in the video. “It feels like teamwork in the smoothest, most ideal way possible.”
The video jumps from a neon-lit car wash being swept away by waves to a scene of a man walking through clouds to a woman lighting up the beach.
Lovin said Sora has sparked a new creative process, describing it as "magical" to bring ideas that previously existed only in her imagination to life in video form.
“Ever since I was a kid, I would imagine certain things that I see in real life, in different ways, in sort of montages or surreal images, but I've never been able to be a producer or director, so it's never come true until now. So this is like a dream come true.”
Animals in strange places
Like the previous video, this next piece comes from someone who isn't a traditional filmmaker: Benjamin Desai, a creative technologist and digital artist who focuses primarily on augmented reality and immersive content.
“I'm excited to share this Sora-powered, imaginative look into an alternate past,” he said in a statement. In the video, Desai blends “the aesthetics of early 20th century cinema with whimsical scenarios and the placement of animals in unexpected roles.”
The video begins with a bear riding a bike and a gorilla on a skateboard. As it progresses, we see a dancing panda, a man riding a dinosaur and a woman riding a giant turtle. It's both charming and unsettling.
“The piece aims to evoke a sense of wonder while showcasing the potential of modern technology,” Desai explains. “Creating with Sora is still an experimental process, involving a lot of iteration and tweaking. It's more of a human-AI collaboration than a magic button solution.”
When will Sora be seen in public?
OpenAI has stopped offering estimates of when Sora might become publicly available, focusing instead on what it is doing to prepare for a release.
Earlier this year, CTO Mira Murati suggested it might be released this summer, but that doesn't seem likely at this point. If it is released to the public this year, it will likely be after the US presidential election in November and may be tied to a major ChatGPT update.
The company says it is now rolling Sora out to a broader group of professionals beyond filmmakers, including VFX experts, architects, choreographers, engineers, artists and other creatives.
This will "help us understand the capabilities and limitations of our models and shape the next stage of our research to build safer AI systems over time," the company explained.
Final thoughts
The videos are impressive and continue to show the power of the Sora model, but other tools such as the Luma Labs Dream Machine, Runway Gen-3 and China's Kling AI offer similar rendering quality.
While Sora appears to capture movement more accurately than any other model, it is only a matter of time before rivals close that gap — which raises the question of why OpenAI is being so cautious.