OpenAI's latest Sora video is a fast-paced montage of innovation, conversation, and TED's signature red stage, as the company uses its flagship video product to promote the new season of TED Talks.
The action is a slightly nauseating rollercoaster ride through labs, factories, and lecture halls, ending with a shot of someone giving a talk on stage.
It promotes a new season of TED Talks focused on artificial intelligence by imagining what TED might be covering in 40 years' time.
This is the latest Sora release made by professional video producers rather than by the OpenAI team itself, following a nature documentary, a music video, and a short film about the Bubblehead Man.
What will TED look like in 40 years? #TED2024 collaborated with artists @PaulTrillo and @OpenAI to create this special video using Sora, an unreleased text-to-video model. Stay tuned for more breakthrough AI, coming soon to https://t.co/YLcO5Ju923 pic.twitter.com/lTHhcUm4Fi (April 19, 2024)
Currently, Sora is a closed system, so only a small number of OpenAI-approved artists and creators can make anything with it. This is expected to change later this year as OpenAI looks to bring Sora to ChatGPT and to third-party tools such as Adobe Premiere Pro.
The TED Talks video was created with Sora by LA-based director Paul Trillo. To produce the final 1:33 film, he generated more than 330 clips from text prompts, which he then had to edit down.
The final cut uses 25 of those clips. Everything except the TED logo, including all motion and every individual shot, was generated by the AI.
Trillo said: “It's really fun to use this new tool to explore techniques we've used in the past. It generates a lot of new ideas.”
This is a sentiment expressed by many of the creators given early access to Sora: it could lead to entirely new ways to tell stories visually.
What can you see in the new video?

The video begins with what appears to be an explosion, with the camera rapidly zooming toward it to kick off a “through the looking glass” journey of discovery.
The camera then flies over several cities and into different types of buildings. We first see someone giving a talk, and the zoom continues through factories, experiments, and more.
Every few scenes, another person gives a talk against a red background, presumably designed to evoke a TED Talk, before further shots of experiments and research.
It's a compelling and well-made video, with music by Jack. It is a clear demonstration of what is possible with generative AI video in the hands of artists, and it further supports my view that AI video will not replace creatives, but rather unleash a new era of creativity.
