Viral deepfake demo prompts ByteDance to restrict new AI video tool



Within three days of its release, ByteDance, the Chinese tech giant that owns TikTok, temporarily restricted the functionality of its new generative video model.

The company announced the changes after a popular content creator demonstrated that the tool could recreate his voice, his office environment, and even the back of his body from just a single photo.

A trial version of the model, called Seedance 2.0, entered open testing on Jimeng, ByteDance’s AI content platform, on Saturday and drew immediate comparisons with Sora 2, the video generation model released last year by ChatGPT creator OpenAI.

On Monday, online influencer Pang Tianhong, founder of the tech media outlet Media Storm and known online as Tim, posted a video showing the tool generating highly realistic scenes from limited input.

He described the results as “horrifying” and suggested that traditional film production would soon face disruption. “Traditional film and TV production is on a countdown until it is swept away by the AI tsunami,” Pang said on the video streaming platform Bilibili.

The post quickly garnered millions of views and thrust the model into the spotlight. Commenters on Chinese social media praised the new tool, sharing short clips of people playing basketball with LeBron James, a cat fighting Godzilla, and recreations of famous battle scenes.

By the time Seedance 2.0 was officially released on Thursday, the topic had drawn more than 70 million views on the microblogging platform Weibo, with some users raising concerns about authorship and copyright protection, and experts warning of potential legal risks.

In response, domestic media quoted Jimeng staff as saying that Seedance 2.0 will restrict the use of real people as references in order to maintain what the company calls a “healthy and sustainable media environment.” The system currently blocks direct uploads of celebrity faces and requires users to verify their identity before generating content featuring their own likeness.

Despite the new restrictions, Seedance 2.0 is being rolled out in stages. Access requires points, and higher-tier memberships unlock additional features, including faster processing, higher resolution, and lip-syncing.

The tool has also drawn praise from industry figures. The producer of the popular video game “Black Myth: Wukong” called it the “strongest video generation model” currently available, and Tang, a professional AI-generated content creator and one of Jimeng’s top collaborators, said the model cut the production time for a one-minute video from three or four days to about half a day.

“This model represents a qualitative leap in visual understanding, from dialogue and performance to camera movement and effects,” she said.

In an interview with Sixth Tone, Tang, who runs both a tutorial account and a short AI video account, said Seedance 2.0 is a “double-edged sword.” While the new model benefits her short video account by improving efficiency, she added, it poses greater challenges for her second, more technically oriented account.

“We have benefited from technological advantages and now need a new outlet,” Tang said. “Experience alone will no longer be enough; creators will need a stronger IP and distinct identity.”

At the same time, the model’s realism has amplified concerns about authorship, publicity rights, and copyright protection. After an AI-generated fight scene featuring Hong Kong actor Stephen Chow surfaced online, his representatives publicly questioned whether such work constituted copyright infringement.

Xia Lei, a professor at Beihang University’s Institute of Artificial Intelligence in Beijing, told domestic media that such restrictions reflect necessary safety measures. “As technological advances accelerate, it becomes imperative to maintain boundaries against abuse,” he said.

Editor: Marianne Gunnarson.

(Header image: VCG)


