![Warning footage generated by AI. [Photo source = YouTuber Click Designer]](https://wimg.mk.co.kr/news/cms/202506/09/news-p.v1.20250609.74c9fe67974c415580dafbdfa8532338_P1.png)
Videos created with artificial intelligence (AI) have reached a level at which they are difficult to distinguish from reality, calling for particular caution on social media.
According to industry sources on the 9th, a video raising awareness of AI-generated content was posted on the social media platform Threads, drawing widespread attention.
The video opens with a news anchor delivering breaking news that lava is melting downtown Seoul, then handing over to a reporter on the scene. The reporter appears moments later, standing calmly while red lava flows behind him.
The reporter then says, “The lava you see behind me isn't real. I'm an AI.”
Netizens who watched the video expressed surprise at how lifelike it was, along with concern that increasingly sophisticated AI technology could be exploited. Some responded that education is needed so that older people unfamiliar with AI can learn to tell whether content is AI-generated.
The video was made with Google's video-generation AI model “Veo,” and it satirizes a reality in which people accept content without questioning it.
Click Designer, the YouTuber who produced the video, discussed its intent with Yonhap News.
As AI-generated imagery blurs the boundary between reality and fabrication, demand is growing for labels indicating whether AI was used. The AI Basic Act, scheduled to take effect next year, requires that content such as movies and dramas created with generative AI be labeled as AI-generated.
Last month, Naver introduced a feature that lets creators mark their content with an “AI used” label on blogs, cafes, Naver TV, and Clip.
Some experts acknowledge that determining whether a video was generated by AI is not easy, and argue that media literacy education should be a priority.
Choi Byung-ho, a professor at Korea University's AI Institute, said, “It is virtually impossible to verify the reliability of AI video-generation technologies released as open source. Media outlets and nonprofit organizations need to identify the sources of content and run campaigns encouraging people to question everything they see.”