
The video, which falsely depicts lava flowing through downtown Seoul, has gone viral, raising fresh concerns about the credibility of content generated by artificial intelligence.
Posted to social media on Monday, the video opens with a news anchor delivering breaking news that “lava is erupting in downtown Seoul.” The broadcast then cuts to a field reporter standing in front of what appears to be flowing red lava.
However, the reporter quickly reveals, “The lava behind me is not real. I'm an AI.” Several other characters, including students, celebrities and business people, then appear in the video, each revealing that they are AI-generated and warning viewers not to be fooled by appearances.
The video was created by YouTuber Ddalgak using Google's generative AI model Veo 3, highlighting how easily people can accept content without question.
In an interview with local media, Ddalgak said he was inspired by the case of a Korean woman who was scammed by an AI-generated video of Tesla CEO Elon Musk. “It was shocking that even low-quality AI technology can deceive people,” he said.
As AI-generated videos increasingly blur the line between fiction and reality, calls are growing for clearer labeling of such content. Under Korea's AI Basic Act, scheduled to take effect in January, content created using generative AI, including films and dramas, must be clearly labeled as AI-generated.
In a related move, Naver, Korea's largest search engine, launched a feature in May that allows creators to label AI-generated content on its platforms, including blogs, cafes, Naver TV and video clips.
shinjh@heraldcorp.com
