It turns out there's a downside to lending your likeness to an AI text-to-video generator. One entirely predictable downside is that users will take your likeness and create deepfakes of your digital double committing crimes.
The likeness of OpenAI CEO Sam Altman appears throughout the video announcing the release of Sora 2. The trailer itself was made entirely with Sora 2, placing Altman in an AI-generated studio, an impossibly neon-colored forest, and even an oversized duck race. But according to OpenAI researcher Gabriel Petersson, the most popular AI-generated clips in the Sora app are reportedly fake security-camera footage of Sam Altman stealing graphics cards.
Along with the new addition of AI-generated audio, Sora 2 shows a clear improvement over the first model released last year. That said, you can take some comfort in the fact that it still struggles to render text accurately. In addition to the wobbly lettering on a security guard's shirt, Sam Altman's digital double can be seen swiping stock from a display labeled "Gratics Card."
The idea of Altman stealing graphics cards is funny precisely because it mirrors what's happening on an industrial scale to supply OpenAI's real data centers. Ten gigawatts' worth of Nvidia chips are headed to OpenAI in the near future, purchased with money from OpenAI's investors, including Nvidia itself. How about that!
[Embedded post: "I have my favorite Sora 2 video now. Enjoy this short moment." September 30, 2025]
Setting the criminal-behavior deepfakes aside, it's also a bit concerning that OpenAI chose to showcase Sora 2's abilities with AI-generated depictions of stunts that are equal parts ridiculous and dangerous. No, I'm not trying to whip up a moral panic over the one clip in which a man rides two horses at once and falls on his face; that one doesn't seem so bad, since the riders are dressed as cartoon characters.
What I'm worried younger viewers might try to emulate is a clip of a guy doing a backflip on a paddleboard in open water. The Sora app did launch with parental controls, managed via ChatGPT. These tools let parents opt their teens' accounts into less personalized feeds, choose whether their teens can send and receive direct messages, and control whether the feed scrolls continuously.
However, Sora 2 clips carry no watermark identifying them as AI-generated. These clips can look realistic, yet there is no disclaimer warning that they aren't grounded in reality at all. A recent Microsoft study suggests people can identify AI-generated images only 62% of the time, so I worry about how Sora 2 clips could spread disinformation and be leveraged by harassers and bullies.