On 7 May 2026, the EU legislative bodies reached a political agreement (the Agreement) on proposed amendments to the AI Act. The AI Act Omnibus forms part of the EU’s broader omnibus legislative package aimed at simplifying digital regulation. The Agreement clarifies existing AI Act requirements, extends compliance deadlines for high-risk AI systems (HRAIS), and introduces new rules regarding intimate content generated by AI. The European Commission (Commission) has also published draft guidelines and a code of practice addressing the AI Act’s transparency requirements.
While the Agreement still requires formal adoption, companies providing or deploying AI systems within the EU should begin aligning their compliance programs with the new framework. This client alert summarizes the key changes, discusses other recent developments in EU AI regulation, and outlines practical steps to adapt to the new rules.
Summary of major changes
The Agreement does not change the core architecture of the AI Act, which maintains its risk-based approach and common obligations for providers and deployers. However, the Agreement proposes several updates that will impact scope, practical application, and enforcement, including:
- A new prohibition targeting so-called “nudify” applications that generate potentially harmful intimate content, including child sexual abuse material (CSAM);
- Extended deadlines for HRAIS obligations and watermarking transparency requirements;
- Removal of certain industrial applications from the scope of the AI Act; and
- A streamlined process for bias detection.
The extension of the HRAIS compliance deadlines is the most significant change in practice. Companies should use the new deadlines as the basis for their implementation plans.
The table below summarizes the new deadlines for AI Act requirements across the EU.

| Requirement | Applicable date |
| --- | --- |
| Existing prohibited practices (e.g., social scoring) | February 2, 2025 |
| Transparency obligations (Art. 50 AI Act) | August 2, 2026 |
| New prohibition on AI-generated intimate content/CSAM | December 2, 2026 |
| Watermarking for generative AI systems already on the market | December 2, 2026 |
| Standalone HRAIS | December 2, 2027 |
| HRAIS as regulated products or safety components | August 2, 2028 |
Prohibited conduct: New ban on AI-generated intimate content
Certain AI practices have been prohibited under Article 5 of the AI Act since February 2, 2025. These include social scoring, subliminal manipulation, and real-time remote biometric identification in public spaces. The Agreement, effective December 2, 2026, extends these prohibitions to “nudify” applications, i.e., AI systems that generate or manipulate sexually explicit or intimate images, videos, or audio without explicit consent, or that create CSAM. Providers and deployers may not use, or place on the EU market, AI systems that are designed to create intimate deepfakes or CSAM, or that lack reasonable safeguards against such use. Violations may result in fines of up to EUR 35 million or 7% of annual global turnover, whichever is higher.
Companies may also be exposed to civil (collective) claims under EU product liability rules. To mitigate these risks, companies must anticipate potential misuse during development, take appropriate safety measures, conduct comprehensive risk assessments, and monitor for harmful use.
HRAIS: Extended deadlines to comply with the comprehensive framework
The AI Act imposes stringent requirements on HRAIS, which apply to two categories of AI systems:
- Standalone AI systems that fall under certain use cases defined in the AI Act, including systems used for recruitment and performance evaluation, credit scoring, insurance risk assessment, emotion recognition, biometric identification, and critical infrastructure; or
- AI systems that are products, or safety components of products, regulated under certain EU product safety laws listed in the AI Act, such as the safety regimes for medical devices, vehicles, and toys.
HRAIS must comply with comprehensive obligations covering risk management, data governance, technical documentation, transparency, human oversight, accuracy, robustness, and cybersecurity. The Commission faced calls to extend the HRAIS compliance deadline because it had not yet published the harmonized standards and guidance needed to enable practical implementation within the original deadline set in the AI Act.
The Agreement extends the compliance deadline for standalone HRAIS to December 2, 2027. This standalone extension does not apply to AI systems that qualify as regulated products or safety components, whose deadline is extended to August 2, 2028. Importantly, AI systems placed on the EU market before the applicable date will not be subject to HRAIS requirements unless significant changes are made to them after that date.
This extension gives companies additional time to finalize risk classifications, build governance frameworks, and prepare technical documentation and monitoring systems. At the same time, companies should remember that their HRAIS may be subject to other applicable legal obligations, particularly under the GDPR in relation to personal data (a very broadly defined concept). EU data protection authorities are already actively enforcing the GDPR in the AI field, including the rules on data minimization, transparency, and data security.
Transparency obligations: Watermarking deadline extension and new guidelines
The Agreement does not change the scope of the existing transparency obligations under Article 50 of the AI Act. These requirements include:
- Disclosure obligations for interactive AI systems, emotion recognition and biometric categorization systems, and deepfakes; and
- Watermarking/labeling obligations for AI-generated or manipulated content.
The scope of these obligations is set out in more detail in the Commission’s recently published draft Guidelines on the Implementation of Transparency Obligations for Certain AI Systems, which provide non-binding interpretive guidance on Article 50 of the AI Act, and in a Code of Practice on Transparency in AI-Generated Content drafted by independent experts, which translates these obligations into practical compliance measures. Both documents have been published in draft form for stakeholder comment; the final versions are expected in the coming weeks and are likely to closely follow the consultation drafts.
The obligations under Article 50 of the AI Act apply from August 2, 2026. Under the grandfathering rules introduced by the Agreement, generative AI systems (i.e., AI systems specifically intended to produce synthetic content such as text, images, audio, or video) that were placed on the market or put into service before that date must comply with the watermarking requirements only from December 2, 2026. Violations may result in fines of up to EUR 15 million or 3% of annual global turnover, whichever is higher.
Further alignment: reducing regulatory duplication and clarifying scope and enforcement
The Agreement also addresses a central concern raised during the negotiations, namely the interaction between the AI Act and existing EU sectoral safety legislation governing products incorporating AI systems.
- Industrial AI carve-out: AI used in industrial applications or in products already regulated under the Machinery Regulation will be exempted from the AI Act. Other regulated industrial products and safety components (such as medical devices, toys, lifts, and certain transportation applications) need only comply with the applicable sectoral safety legislation rather than the potentially overlapping requirements of the AI Act.
- Narrower definition of “safety component”: The Agreement narrows the definition of “safety component” for HRAIS classification purposes. Regulated products with AI functionality that merely assists users or optimizes performance are not automatically subject to HRAIS obligations unless their failure or malfunction poses a health or safety risk.
- SME simplification extended to mid-caps: The AI Act’s simplified compliance framework for small and medium-sized enterprises will be extended to companies with up to 750 employees and annual revenues of up to EUR 150 million. Benefits include simplified guidance, reduced fines, access to regulatory sandboxes, and standardized documentation templates.
- Bias detection: The Agreement makes it easier to process GDPR special categories of personal data (such as health information, biometric data, racial or ethnic origin, and sexual orientation) where necessary to detect and correct bias in AI models.
Outlook and impact on business
The text of the political agreement has not yet been published. It will now proceed to formal adoption by the European Parliament and the Council, which is expected by July 2026, ahead of August 2, 2026, when the HRAIS requirements would otherwise come into force.
The extended compliance deadlines provide additional time for implementation. However, given the complexity of the EU’s AI regime, companies should not slacken their compliance and governance efforts: the harmonized standards and guidance required for practical implementation may not be published until close to the new deadlines, leaving limited time to adapt.
The AI Act’s prohibitions on particularly harmful AI practices (e.g., exploitation of vulnerable individuals and social scoring) are already in force. They will be complemented by the new rules on AI-generated intimate content and CSAM, effective December 2, 2026. The chatbot transparency mandate will take effect in August 2026, and the grace period for labeling AI-generated content will be just four months (until December 2, 2026). Breaches of these requirements can result in significant civil liability and, in some cases, fines of up to EUR 35 million or 7% of annual global turnover, whichever is higher.
EU data protection authorities are already enforcing the GDPR with respect to AI, imposing fines and banning certain uses of AI systems and models. Companies need to reflect this evolving enforcement landscape in their AI Act compliance strategies and broader AI governance programs. Practical steps include building in flexibility to respond to upcoming standards, guidance, and evolving market practices. Demonstrable AI legal readiness is increasingly seen as a competitive advantage and an indicator of credibility in the European market.
