The Artificial Intelligence (AI) Act was published today (12 July) in the Official Journal of the European Union. The Act confirms a phased schedule for the implementation of its provisions, depending on the risks associated with the aspects of AI systems being regulated. The final deadline for full application is 2 August 2026, but the Act itself enters into force in less than three weeks, on 1 August, and its “General Provisions” and rules on “Prohibited AI Practices” become binding on member states from 2 February 2025.
These are addressed in Chapter 1 (pages 48-51) and Chapter 2 (pages 51-53) of the 150-page document. Article 113 then sets a deadline of 2 August 2025, one year after entry into force, for the provisions on “notifying authorities” for “high-risk AI systems” (Chapter 3, Section 4, pages 70-76), the establishment of the AI Office and the European Artificial Intelligence Board (Chapter 7, pages 95-100), and the enforcement of “penalties” for non-compliance (Chapter 12, pages 115-118). From 2 August 2026, the full scope of the AI Act will apply, with some exceptions, such as extended transition periods for AI systems already on the market; these mean the Act will not apply in its entirety until 2 August 2027.
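The phased schedule described above can be summarised as a simple date lookup. This is an illustrative sketch only: the dates reflect the timeline reported here, while the structure, names and one-line descriptions are our own simplification, not anything defined by the Act.

```python
from datetime import date

# Illustrative summary of the AI Act's phased application dates as
# reported above; descriptions are paraphrased, not official wording.
AI_ACT_MILESTONES = {
    date(2025, 2, 2): "General provisions and prohibited AI practices apply",
    date(2025, 8, 2): "Rules on notifying authorities, governance bodies and penalties apply",
    date(2026, 8, 2): "Full scope of the Act applies, with limited exceptions",
    date(2027, 8, 2): "Remaining transitional provisions end",
}

def phases_in_force(on: date) -> list[str]:
    """Return descriptions of every phase already in effect on a given date."""
    return [desc for d, desc in sorted(AI_ACT_MILESTONES.items()) if d <= on]
```

For example, `phases_in_force(date(2025, 9, 1))` would return the first two phases, since only the February and August 2025 deadlines have passed by then.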
The new AI Act, which is expected to influence AI policy around the world, sets a common regulatory and legal framework for the development and use of AI in the EU. It was proposed by the European Commission (EC) in April 2021, approved by the European Parliament in March 2024 and given final endorsement by the Council in May 2024. Other countries are pursuing their own versions, but the EU model is expected to serve as a template for many of them. “That's the Brussels effect,” Dan Nechita, head of cabinet for MEP Dragoş Tudorache and responsible for shepherding the AI Act through its many rounds of voting, told the Digital Enterprise Show last month.
“Just like with GDPR, we decided that this is how we should protect personal data. GDPR is not perfect, but it has had a global impact. The AI Act will be the same,” he said. The law takes a “risk-based” approach: the higher the risk of harm to society, the stricter the rules. Rather than conferring rights on individuals, it regulates the providers of AI systems and entities that use them in a professional context. The most controversial measure is its treatment of facial recognition in public places, which is classified as high-risk but not banned outright; Amnesty International has argued that the general use of facial recognition should be banned altogether.
The AI Act establishes four risk levels: unacceptable risk, high risk, limited risk and minimal risk, with an additional category for general-purpose AI. Applications in the first group (“unacceptable risk”) are banned outright, while those in the second group (“high risk”) must comply with security and transparency obligations and undergo conformity assessments. Limited-risk applications carry transparency obligations only, and minimal-risk applications are unregulated. “The majority of the regulations apply to AI systems that have a very significant impact on the fundamental rights of human beings,” Nechita explained last month.
In particular, this relates to the use of AI in employment decisions, law enforcement and immigration, areas where “the use of AI can lead to discrimination and ultimately to people being jailed or denied work or social benefits”. He said: “These are all high-risk cases, at the top of the pyramid. Further down the pyramid are medium-risk cases: AI systems that manipulate or influence people, such as chatbots and deepfakes. In those cases the law requires transparency. The AI has to say: I'm an AI, I'm not your psychologist. And about 80% of the other AI systems out there are categorised as low-risk.”
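The risk pyramid Nechita describes can be sketched as a simple tier-to-obligations mapping. This is an illustrative simplification: the tier names come from the Act, but the structure, function name and one-line obligation summaries below are our own paraphrase of the article, not the Act's actual legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # top of the pyramid: banned outright
    HIGH = "high"                  # e.g. AI in hiring, policing, immigration
    LIMITED = "limited"            # e.g. chatbots, deepfakes
    MINIMAL = "minimal"            # the roughly 80% of systems left unregulated

# Paraphrased obligation summaries; the Act defines these in far more detail.
OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["security obligations", "transparency obligations", "conformity assessment"],
    RiskTier.LIMITED: ["transparency obligations (disclose that it is an AI)"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

The mapping makes the structure of the scheme explicit: each step down the pyramid strictly reduces the obligations, from an outright ban to none at all.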
The AI Act provides for the creation of several new institutions to foster cooperation between member states and ensure compliance with the regulation across the bloc. These include a new AI Office and a European Artificial Intelligence Board (EAIB). The AI Office will be responsible for “supervising the very large players building very powerful systems at the cutting edge of AI”. The EAIB will comprise one representative from each member state and will be tasked with ensuring the Act is applied consistently and effectively across the EU. These two institutions will be complemented by national-level supervisory authorities, along with a new Advisory Forum and Scientific Panel that will provide guidance from industry, academia and civil society.