Artificial Intelligence Act (AI Act), European Union (EU) legislation that seeks to improve the experience, privacy, and safety of EU citizens with respect to artificial intelligence (AI). The law places restrictions on businesses and other entities that use AI to collect or share information and is intended to help EU citizens avoid discrimination.
The AI Act was released alongside other initiatives designed to improve the way companies use AI. For example, in January 2024 the European Commission (the EU's executive arm) launched an AI innovation package designed to support start-ups and midsize enterprises, in part by providing supercomputing infrastructure to improve the way such businesses' AI models are trained. The plan also established AI Factories, supercomputers around Europe dedicated to work on AI models.
Background, timeline, and scope
The first proposal for improved AI regulations in the EU was announced by the European Commission in April 2021. After three years of deliberation and revision, the Council of the European Union (made up of government ministers from each EU member state) adopted the AI Act on May 21, 2024, putting the regulations into practice.
The AI Act applies to entities that create or use AI in their business. These entities include providers, such as OpenAI (developer of the generative AI model ChatGPT); deployers, or companies that use models such as ChatGPT or AI chatbots; and importers, entities that bring AI technology into the EU from elsewhere. Although the law applies only to countries within the EU, similar laws exist in South Korea and Brazil, as well as in more than a dozen U.S. states, including Illinois, California, Colorado, and New York.
Terminology and key principles
The AI Act lays out a series of risk tiers based on how AI is used. AI systems that pose an “unacceptable risk” are prohibited. The conditions such systems violate include the following:
- AI cannot be used to manipulate or deceive users. For example, AI-generated information that has not been fact-checked can lead users to engage in dangerous behavior, resulting in serious injury.
- AI cannot be used to discriminate against a particular social group. For example, if an autonomous vehicle uses AI, its developers must ensure that the vehicle can detect pedestrians of all skin colors and avoid accidents.
- AI cannot be used to assign individuals a “social score.” This practice, used by the Chinese government, ranks citizens on a scale that determines favorable or unfavorable treatment.
- AI cannot be used to categorize people on the basis of biometric identifiers. Although biometric systems can be used legally (for example, to identify workers entering an office building), such systems cannot be used to discriminate against social groups on the basis of physical characteristics.
- AI cannot be used to create databases of individuals deemed likely to commit a crime. This clause covers appearance-based discrimination and addresses privacy concerns about CCTV (closed-circuit television) footage: real-time data collection is restricted to limited circumstances of need, although AI can be used to identify people who have already committed crimes.
“High-risk” AI systems are subject to intense scrutiny but are not banned outright. These systems include critical safety infrastructure, such as traffic-light controls and medical devices; biometric identification (forms that fall under the “unacceptable risk” category remain prohibited); and AI used in hiring, which can discriminate among applicants when applied to employment decisions. Companies operating high-risk systems must submit documentation showing that their systems do not violate the law; this transparency is essential to receiving government approval for high-risk systems.
AI systems posing a “limited risk” have some potential to manipulate consumers. They present transparency risks, but at a much lower level than high-risk or unacceptable-risk systems. This tier is particularly relevant to generative AI systems and chatbots: even when such systems are poorly designed, they are unlikely to cause significant harm to users, according to the AI Act. The provision also covers deepfakes and other synthetically generated AI media; companies must disclose such content when distributing it, because it can be difficult to distinguish from real images and videos.
The final category, “minimal risk,” covers systems that do not inherently violate consumer rights and are generally expected only to follow principles of nondiscrimination. The AI Act also states that tech companies must notify individuals when their work is used to train a company's generative AI models. If a company violates any of the principles discussed, the maximum penalty is a fine of €35 million or 7 percent of the company's global annual turnover, whichever is higher.
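To illustrate the scale of that penalty, here is a minimal Python sketch of the cap calculation, assuming the widely reported rule that the applicable ceiling is whichever amount is higher; the helper function name and the turnover figure are hypothetical:

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    # Cap for the most serious violations: €35 million or 7 percent of
    # global annual turnover, whichever is higher (illustrative sketch,
    # not a statement of the act's full penalty rules).
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# A hypothetical firm with €2 billion in annual turnover: 7 percent of
# turnover (€140 million) exceeds the €35 million floor.
print(f"€{max_fine_eur(2_000_000_000):,.0f}")  # prints €140,000,000

For a company with turnover below €500 million, 7 percent would come to less than €35 million, so under this assumption the €35 million floor would apply instead.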
Big Tech pushback
Many large tech companies, including Meta and OpenAI, have chafed at these regulations, in particular the requirement to notify people when their work is used in training data, claiming that such rules slow innovation. OpenAI CEO and cofounder Sam Altman called for Europe to embrace AI as the future during a panel discussion at the Technical University of Berlin, saying he wanted to be able to deploy products in Europe as quickly as in other parts of the world, a statement that hinted at impatience with AI rollouts under EU restrictions.
Meta has also taken a combative stance. In February 2025 company lobbyist Joel Kaplan equated the EU's fines on tech companies with tariffs, echoing a similar statement by Meta CEO Mark Zuckerberg. Kaplan stressed the importance of innovation, arguing that the EU would fall behind if AI were policed at the level of rigor the act proposes. Many tech companies have been especially keen to press the EU to ease its restrictions since Donald Trump, who became U.S. president in January 2025, argued that such laws curb innovation.
Tara Ramanathan