Then Vs. Now: Will Smith’s AI video eating spaghetti shows how far technology has come



It’s no longer your mom’s AI spaghetti.

In just two and a half years, AI video generation has progressed from struggling to depict Will Smith eating spaghetti to creating lifelike videos.

The unofficial benchmark test began in 2023 when a Reddit user posted a video of the Academy Award winner eating spaghetti, which was generated by ModelScope, a text-to-video AI model.

The initial results were horrifying. Will Smith looked nothing like his movie-star self. Instead, he resembled a bad caricature, the kind a street artist might sketch on a tourist boardwalk. In some videos, he didn’t actually consume the spaghetti at all, failing even the most basic premise of the test.

The failures highlighted the early limitations of AI-generated video and images, which often produced people with eight fingers and other anatomical defects.

Smith himself acknowledged the test in February 2024, posting a TikTok of himself eating spaghetti in an exaggerated, cartoonish manner, parodying the original video.

As Sky News and others have recently pointed out, a lot has changed since then.

In 2024, the Chinese AI model MiniMax produced a more accurate likeness, but AI Smith’s chewing still looked off, and at the end of the clip the noodles appeared to float. In May, a user posted on X that he had used Google’s Veo 3 to generate a new version; this time the flaw was the audio, with the noodles crunching as AI Smith chewed. Videos produced with the newer Veo 3.1 look even more realistic.

OpenAI’s Sora is widely regarded as the best AI video generator on the market. So good, in fact, that shortly after releasing Sora 2 and its accompanying TikTok-like mobile app in September, the company was forced to add guardrails around third-party likeness and copyright following a series of high-profile incidents involving SpongeBob SquarePants and Martin Luther King Jr.

Google and Elon Musk’s xAI are racing to catch up. In July, xAI released Grok Imagine, a text-to-video generator.

Passing the spaghetti test may now be even more difficult as Hollywood and other rights holders step up efforts to prevent AI companies from infringing their rights. Days before Sora 2’s release, Disney, Universal, Warner Bros. and other rights holders filed a lawsuit against MiniMax in federal court.

Cameo, a personalized video company, sued OpenAI over its decision to name the core feature of its Sora app “Cameo.” That feature, which lets users upload facial scans to the app, is one reason Sora can produce such high-quality videos, especially of non-public figures. In November, a federal judge temporarily blocked OpenAI from using the word “cameo.”

Meanwhile, in Washington, some lawmakers are alarmed that AI can now generate videos of people saying things they never said.

Not everyone is avoiding AI video. Coca-Cola recently announced that it is once again using AI to generate holiday ads, this time leveraging Sora, Veo 3, and Luma AI.




