Artificial intelligence systems are classified as high-risk not on the basis of the technologies, algorithms, or models they use, but on the basis of the potential impact of their output on the rights to life and privacy, access to health care, education and essential services, and personal freedom.
The Department of Science and Technology is inviting public comments on the Prime Minister’s draft decision on the list of high-risk artificial intelligence (AI) systems. This move is expected to establish an important legal basis for the management, supervision and control of risks arising from the development and deployment of AI, while ensuring a balance between promoting innovation and protecting the legitimate rights and interests of citizens, organizations and society.
According to the draft, the list of high-risk AI systems will be created based on risk classification criteria in specific cases to ensure clarity, transparency, and feasibility.

The draft also stipulates that AI systems intended for large-scale deployment with widespread impact, whose risks cannot be fully managed under existing regulations, or that require regulatory requirements to be applied uniformly, may also be classified as high-risk.
Accordingly, the list of high-risk AI systems comprises four groups.
The first group includes AI systems that have the potential to impact human rights, such as systems used to classify, rank, and predict human characteristics, manage employment and labor, or conduct large-scale biometric identification.
The second group consists of AI systems that can impact safety and security, such as systems that ensure the safety of products and goods, or that protect national security and social order.
The third group includes AI systems operating in critical areas closely related to the public interest. These include systems that operate or monitor critical infrastructure; facilitate medical diagnosis, treatment, and the allocation of medical resources; facilitate access to education and the assessment of learning outcomes; assess personal financial risks, loans, and insurance; and assist staff in making administrative decisions and carrying out judicial support activities.
The final group includes AI systems that have large-scale impacts or can cause consequences that are difficult to remediate. This includes systems deployed across two or more states or impacting more than 50,000 users, as well as systems that process sensitive personal data or data subject to high confidentiality requirements.
– (VLLF)
