OpenAI's new Sora text-to-video generation tool won't be publicly available until later this year, but in the meantime we're getting some fascinating glimpses of what it can do, including an impressive new video (below) imagining what TED Talks could look like 40 years from now.
To create the FPV drone-style video, TED Talks collaborated with OpenAI and filmmaker Paul Trillo, who has been using Sora since February. The result is an impressive, if slightly disconcerting, film that swoops through futuristic conference talks, strange laboratories, and underwater tunnels.
The video once again demonstrates both the amazing potential of OpenAI's Sora and its limitations. FPV drone-style effects have become popular for high-impact social media videos, but they have traditionally required advanced drone piloting skills and expensive kit beyond the new DJI Avata 2.
Sora's new video suggests that these kinds of effects could be opened up to new creators at a significantly lower cost. That said, we don't yet know how much OpenAI's new tool will cost or who will get access to it when it becomes available.
What will TED look like in 40 years? #TED2024 collaborated with artists @PaulTrillo and @OpenAI to create this special video using Sora, an unreleased text-to-video model. Stay tuned for more breakthrough AI, coming soon to https://t.co/YLcO5Ju923. pic.twitter.com/lTHhcUm4Fi (April 19, 2024)
But the video (above) also shows that Sora is still far from being a reliable tool for serious filmmaking. The people in its shots hold up on screen for only a few seconds, and the backgrounds are full of uncanny valley nightmare fuel.
The result is an exhilarating experience, but at the same time it can feel strangely off-putting, like landing again after skydiving. Still, we'd like to see more samples in the lead-up to Sora's general availability in late 2024.
How was the video made?

OpenAI and TED Talks didn't go into detail about how this particular video was made, but its creator, Paul Trillo, who recently revealed that he was one of Sora's alpha testers, has spoken more broadly about his experience with the tool.
Trillo told Business Insider about the types of prompts he uses, including "the combination of words I use to make sure it's more cinematic and less like a video game." These apparently include terms such as "35mm," "anamorphic lens," and "depth of field lens vignette," which are necessary because, without them, Sora tends to default to a very digital-looking output.
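To make that concrete, here's a minimal sketch, in Python, of how a base scene description might be combined with cinematic modifiers like the ones Trillo mentions. It's purely illustrative: OpenAI hasn't published a Sora API, so the helper and its structure here are assumptions, not a real client.

```python
# Hypothetical sketch of prompt composition for a text-to-video model.
# Sora has no public API, so this only illustrates the general idea of
# appending cinematic style keywords to a plain scene description.

CINEMATIC_MODIFIERS = [
    "35mm",
    "anamorphic lens",
    "depth of field lens vignette",
]

def build_prompt(scene: str, modifiers: list[str] = CINEMATIC_MODIFIERS) -> str:
    """Append style keywords so the output reads as cinematic rather than game-like."""
    return ", ".join([scene, *modifiers])

print(build_prompt("FPV drone shot flying through a futuristic TED conference hall"))
# FPV drone shot flying through a futuristic TED conference hall, 35mm,
# anamorphic lens, depth of field lens vignette
```

The same pattern would work with any style vocabulary; the point, per Trillo's account, is that the stylistic keywords steer the model away from its default look.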
Currently, all prompts have to go through OpenAI, so they can be run through strict safeguards against issues such as copyright infringement. One of Trillo's most interesting observations is that Sora is currently "like a slot machine that asks for something and jumbles up ideas, but doesn't have an actual physics engine."
This means that, as OpenAI acknowledged in an earlier blog post, Sora is still a long way from keeping people and objects truly consistent over time. OpenAI said that Sora currently "exhibits a number of limitations as a simulator," including the fact that it "does not accurately model the physics of many fundamental interactions, such as glass shattering."
These inconsistencies will likely keep Sora limited to short-form video for some time, but it's still a tool we can't wait to try.
