2026 marks a major regulatory turning point for European companies using or considering artificial intelligence in their human resources (HR) processes. Regulation (EU) 2024/1689 on Artificial Intelligence (the AI Act) is entering a key implementation phase, with the European Commission’s “Digital Omnibus” package clarifying some obligations and shifting certain deadlines.
As announced in our November 24, 2025 AI in the Workplace webinar, we are launching a series of alerts dedicated to AI issues in HR. This first publication provides an overview of current regulatory trends and their impact on the HR function. Future alerts will address the following topics:
- Algorithmic transparency and combating bias in HR systems
- AI literacy under the AI Act: Scope and limits of employer training obligations
- Processing personal data at the intersection of the GDPR and the AI Act
- AI-based workplace surveillance: How far can employers go?
1. Regulatory framework: the AI Act and its risk-based approach
The AI Act, which entered into force on August 1, 2024, establishes a harmonized framework at European level for the use of AI systems. It follows a risk-based approach.
1.2 Four risk levels
Unacceptable risk
AI systems are prohibited if they pose a serious threat to the EU’s fundamental values. These include:
- Social scoring systems
- Emotion recognition in specific situations (especially in the workplace and education)
- Exploiting vulnerabilities in specific groups
High risk
AI systems are classified as high risk if they are used in sensitive areas where they are likely to have a significant impact on fundamental rights, such as education, public safety, or recruitment. HR applications are explicitly identified as a high-risk area. Examples of such high-risk AI applications include, among others:
- Automatic candidate selection
- Performance evaluation
- Workplace surveillance
- Turnover rate prediction system
- Decisions on the continuation or termination of employment contracts
Limited risk
AI systems are classified as limited risk when their use is subject only to specific transparency obligations. Users must be informed that they are interacting with an AI system, and AI-generated content must be appropriately labeled.
Examples include:
- Self-service portal powered by AI algorithms
- HR chatbot
- Virtual assistant for employees
Minimal risk
This category includes all other AI systems that do not fall into the categories above. This includes, for example, spam filters to prevent unwanted email. The majority of AI systems currently in use in the EU fall into this category. Such systems are not subject to specific regulatory requirements, but must comply with other contractual and legal obligations.
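To make the four tiers above tangible, they can be captured in a simple internal AI inventory. The sketch below is illustrative only: the tool names and their tier assignments are hypothetical examples, and classifying a real system under the AI Act requires case-by-case legal analysis.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers of the AI Act, as described above."""
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk (subject to Chapter III obligations)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no specific AI Act requirements)"


# Hypothetical inventory of HR tools, mapped to the tiers described above.
HR_INVENTORY = {
    "cv-screening-engine": RiskTier.HIGH,             # automatic candidate selection
    "performance-scoring": RiskTier.HIGH,             # performance evaluation
    "employee-helpdesk-bot": RiskTier.LIMITED,        # HR chatbot
    "mailbox-spam-filter": RiskTier.MINIMAL,          # spam filtering
    "emotion-monitoring-cam": RiskTier.UNACCEPTABLE,  # emotion recognition at work
}


def systems_by_tier(inventory, tier):
    """Return the names of inventoried systems falling into a given risk tier."""
    return sorted(name for name, t in inventory.items() if t is tier)


print(systems_by_tier(HR_INVENTORY, RiskTier.HIGH))
# -> ['cv-screening-engine', 'performance-scoring']
```

Such an inventory is a starting point for the mapping exercise recommended later in this alert, not a substitute for legal review.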
1.3 Focus on high-risk systems in HR departments
Many AI tools deployed specifically in HR departments should be classified as high risk. Full application of the obligations attached to these systems, set out in Chapter III of the AI Act, was initially scheduled for August 2026. This deadline is now under discussion within the framework of the omnibus procedure (see below).
1.4 Employer obligations to implement high-risk AI systems
Companies using such systems must comply with a strict set of requirements set out in the AI Act, including, among others:
Mandatory human oversight
The AI Act requires that high-risk AI systems be designed and used in a way that allows for effective human oversight. This means:
- The persons responsible for oversight must be appropriately trained and qualified
- Ongoing training is required to maintain compliance over time
- They must have the effective ability to intervene and override the system’s decisions
This obligation is distinct from, but reinforces, the right under Article 22 of the GDPR not to be subject to decisions based solely on automated processing.
Transparency and information obligations
Article 26(7) of the AI Act requires employers, before putting a high-risk AI system into use in the workplace, to clearly and comprehensively inform:
- Employee representatives (works councils, trade union representatives)
- Employees directly affected
National regulations regarding consultation of representative bodies must also be observed.
2. Impact of the “Digital Omnibus” package
On 19 November 2025, the European Commission announced a “Digital Omnibus” package aimed at revising and harmonizing key EU legislation related to the digital single market. This initiative pursues several objectives: closing regulatory gaps, eliminating duplication, and strengthening legal certainty for businesses, especially small and medium-sized enterprises (SMEs). In addition, a legal proposal in the package introduces amendments to the AI Act aimed at facilitating the smooth and effective application of the rules on the development and use of safe and reliable AI.
The omnibus package includes several relief measures for businesses. For HR departments, the most important measures concern the clarification of the relationship between the AI Act and the GDPR, whose interaction raises many practical questions.
In addition, the application deadline for the requirements on high-risk systems would also change. The entry into force of the obligations set out in Chapter III would be conditional on the availability of harmonized technical standards and compliance tools developed by European standardization bodies, rather than tied to a fixed date (August 2026). Specifically, these obligations would apply only 6 or 12 months after a European Commission decision confirming the availability of the relevant standards, depending on the category of the system. In the absence of such a decision, the deadline would fall at the latest in December 2027 or August 2028, depending on the classification of the high-risk system. According to European Commission projections, this postponement could amount to up to 16 months, with some key deadlines extended until December 2027.
Important: the omnibus package remains a proposal, subject to negotiations with the Council of the EU and the European Parliament. Companies should therefore closely monitor developments in the legislative negotiations and continue to prepare for the rules to apply as early as August 2026.
3. Social dialogue: an enduring imperative
Engaging employee representatives will remain an absolute priority in 2026, even if specific AI Act compliance deadlines are postponed. AI is seen not only as a tool that facilitates work, but also as a potential threat to job security and working conditions.
3.1 Mandatory consultation
In most Member States, introducing new AI systems, particularly in HR, requires prior consultation of employee representative bodies, under both Article 26(7) of the AI Act and applicable national law. Ideally, this process should begin before committing to the purchase of a system.
As an example, in Belgium, Collective Bargaining Agreement No. 39 of 13 December 1983 requires that employers who decide to invest in new technologies that have significant collective effects on employment, work organization or working conditions must consult workers’ representatives about these social effects.
Therefore, in view of these issues, employers are strongly advised to adopt a proactive approach by engaging in constructive dialogue with employee representatives well in advance of implementing AI systems.
Developing an AI policy that defines the rules for using artificial intelligence within your company is a smart approach. This document can serve as a common reference and reassure stakeholders regarding the oversight of these technologies, while demonstrating the company’s commitment to the ethical and responsible use of AI.
4. Practical recommendations
This rapidly evolving regulatory landscape requires companies to:
- Map and classify all current and future AI systems against the risk categories of the AI Act, and identify those likely to qualify as high risk
- Train HR and IT teams on the combined requirements of the AI Act and the GDPR
- Conduct an impact assessment before introducing any new AI tool
- Establish regular dialogue with employee representative bodies to anticipate concerns and build trust
- Actively monitor the debate on the omnibus package in the Council of the EU and the European Parliament
If you have any questions about implementing AI in your company, please feel free to contact our team.
