I used the new Sora video app, and it went further than I expected.



I'm not one to view the work of Sam Altman and OpenAI as an existential threat. Every generation faces the challenge of integrating new technologies into society before fully understanding them. For our generation, that challenge is AI.

I've always thought of AI as largely harmless. It's useful for minor tasks, but it's unreliable for mission-critical stuff, and I don't think it's likely to conquer the world any time soon.

However, that doesn't mean it's completely harmless, and the new Sora video generation app is a perfect example of that.

I spent the weekend making a bunch of silly videos, including showing my black cat, Xavi, how to use a BlackBerry.

It's all fun and games until you realize how good AI-generated videos of real people are getting, and that something needs to be done about it.

I was curious, so I tried it

Sora is not yet available to everyone

Bob Ross draws cats with Stephen Radochia on the Sora app

After seeing some mentions online, I installed Sora on my iPhone 17 Pro Max. I didn't have an invitation code, so I couldn't get in right away. But I left it installed, and when I checked again a week later, I was in.

I started by creating a cameo of myself. The process is very simple: look at the selfie camera, read out the three numbers that appear on screen, then turn your head in several directions.

I then asked Sora to make some videos using my likeness, and I was shocked by how good they were.

There were obvious mistakes, and the technology isn't perfect. Still, the results are impressive for a consumer product that goes from prompt to output in just a few minutes.

I'll admit it's a lot of fun. Who wouldn't want a video of Bob Ross holding a black cat?

It's a playground for the mind, and it feels great to tap into raw inspiration and bring a vision to life within minutes. Unfortunately, it doesn't take long to realize the dangers involved.

Videos are popping up everywhere

A black cat being shown how to use a BlackBerry through the Sora app

To be fair to OpenAI, the Sora app is not lawless. Several safety measures and guidelines are in place. My cameo can only be used by others if I grant them permission, and I can see drafts of anything created using my likeness.

There's also a hard rule: if you try to create a video that includes real, living people who haven't given permission, the app throws a content guidelines error.

OpenAI says it doesn't allow harmful content, and transcripts generated from a video's audio are screened to ensure they don't violate its policies.

Generated content carries both visible and invisible watermarks, and OpenAI claims it can trace videos back to the tool with high accuracy.

Each video also has embedded C2PA metadata, which helps distinguish AI-generated videos from genuine content.

That's all well and good, but it doesn't help protect me while I'm mindlessly scrolling through social media.

The proliferation of Sora-made videos on traditional social media platforms like Instagram and TikTok is alarming, because it becomes increasingly difficult to trust what we see.

Sure, the video of George Washington fighting Abraham Lincoln in a cage match is obviously fake, but it's becoming harder to tell with everything else.

AI still gets small details wrong, like the keyboard layout on a computer or the number pad on a phone, but it has become far more convincing at human speech and movement.

It won't take long until Sora becomes a little too precise.

Safeguards are not very helpful

Relive the Computer Chronicles with the Sora app with Stephen Radochia

While it's nice to have guidelines and restrictions, it's hard to believe there's no way around them.

If this is the level of output we can generate for free, it makes me cringe to think of what could happen (and is already possible) with more powerful systems.

When you're casually browsing social media, you don't stop to check metadata. Yes, metadata can help with the bigger problems, like preventing world leaders from starting wars over fake videos.

But these safeguards don't mean much to the average person.

How often does a front-page article get something wrong and later print a retraction? And can a retraction ever fully erase what people heard or saw in the first place?

Most people never find out whether a video they watched turns out to be AI-generated. A certain percentage of viewers will always believe, or be fooled by, these videos, and that is the problem.

We will need more comprehensive protocols to protect ourselves. Unfortunately, I don't see a satisfactory answer beyond using common sense and being more skeptical of what you see.

It's not all bad news

The new technology has legitimately beneficial uses. It's helping educators make learning more engaging and interactive.

How great is it that teachers can transport their students to ancient Rome within minutes?

These tools also help architects and artists visualize designs faster, and it's a good thing that small businesses no longer have to spend thousands of dollars on a simple advertising campaign.

While I hope we are aware of the significant responsibilities that come with such powerful technology, I am not entirely convinced that we are.
