Sora users can now rein in their AI doubles and have more say over how and where their deepfake likenesses appear in the app. The update suggests OpenAI is moving quickly, and that it actually cares about user concerns.
The new controls are part of a broad batch of weekend updates meant to stabilize Sora and manage the chaos brewing in its feed. Sora is essentially a "TikTok for deepfakes": an app where you can make a 10-second video of almost anything (voice included), featuring AI-generated versions of yourself or others. OpenAI calls these virtual likenesses "cameos." Critics call them a looming misinformation disaster.
Bill Peebles, who leads OpenAI's Sora team, said users can now restrict how their AI-generated likenesses are used in the app. For example, you can stop your AI self from appearing in videos about politics, from saying certain words, or from showing up anywhere near mustard if you can't stand the condiment.
OpenAI staff member Thomas Dimson said users can also set preferences for their virtual double, such as having it wear a "#1 Ketchup Fan" ball cap in every video.
The safeguards are welcome, but the track record of AI chatbots like ChatGPT and Claude, which users have tricked into offering advice on explosives, cybercrime, and bioweapons, suggests someone will eventually find a way around them. People are already circumventing one of Sora's other safety features: its weak watermark. Peebles said the company is working on improving it.
Peebles said the Sora team will continue to "hill climb" on making the restrictions more robust and will "add new ways to keep you in control" in the future.
In the week since the app launched, Sora has been busy filling the internet with AI-generated slop. Loose cameo controls were a particular issue: the choice of who could use your likeness (specific people you approve, or broad groups like "everyone") amounted to little more than a yes or no. The platform's unwitting star has been none other than OpenAI CEO Sam Altman, who has appeared in countless mocking videos that illustrate the danger, showing him stealing, rapping, and grilling a dead Pikachu.
