
A depiction of artificial intelligence / Yonhap News
South Korea on Thursday formally enacted a comprehensive law governing the safe use of artificial intelligence (AI) models, becoming the first country in the world to establish a regulatory framework addressing misinformation and other harmful effects of the emerging technology.
According to the Ministry of Science and ICT, the Basic Law on the Development of Artificial Intelligence and Establishment of Reliability Foundations (AI Basic Law) officially entered into force on Thursday.
It marks the first time any government has adopted comprehensive, nationwide guidelines for the use of AI.
The law centers on giving AI developers and companies greater responsibility for addressing deepfake content and misinformation that their AI models may generate, and gives the government the power to impose fines and launch investigations into violations.
Specifically, the law introduces the concept of “high-risk AI,” referring to AI models used in areas that could significantly affect users’ daily lives or safety, such as hiring processes, loan reviews, and medical advice.
Entities that operate such high-risk AI models must notify users that their services are based on AI and are responsible for ensuring their safety. Content generated by an AI model must be watermarked to indicate its AI-generated nature.
A ministry official said, “Applying watermarks to AI-generated content is the minimum preventive measure against side effects caused by the misuse of AI technology, such as deepfake content.”
Global companies providing AI services in South Korea will be required to appoint a local representative if they meet the following criteria: annual global revenue of 1 trillion won ($681 million) or more, domestic sales of 10 billion won or more, and at least 1 million daily domestic users.
Currently, this includes OpenAI and Google.
Violations of the law can result in fines of up to 30 million won, and the government plans to provide a one-year grace period before imposing penalties to allow private companies to adapt to the new rules.
The law also includes measures for the government to promote the AI industry, requiring the science minister to present an AI policy blueprint every three years.
