- A key committee of the European Parliament approved the EU’s AI Act on Thursday, bringing it a step closer to becoming law.
- The regulation takes a risk-based approach to governing artificial intelligence.
- The AI Act sets out requirements for developers of “foundation models” such as ChatGPT, including provisions to ensure that their training data does not violate copyright law.
Private companies have been left to develop AI technology at breakneck speed, giving rise to systems like Microsoft-backed OpenAI’s ChatGPT and Google’s Bard.
Lionel Bonaventure | AFP | Getty Images
A key committee of the European Parliament has approved a first-of-its-kind artificial intelligence regulation, bringing it a step closer to becoming law.
The approval marks a milestone in the race among authorities to get a handle on AI, which is evolving at breakneck speed. The law, known as the European AI Act, is the first law for AI systems in the West. China has already developed draft rules governing how companies build generative AI products like ChatGPT.
The law takes a risk-based approach to regulating AI, imposing obligations on systems proportionate to the level of risk they pose.
The law also sets out requirements for providers of so-called “foundation models” such as ChatGPT, which have become a key concern for regulators given how advanced they are becoming and fears that even skilled workers will be displaced.
The AI Act classifies applications of AI into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk.
Applications with unacceptable risk are banned by default and cannot be deployed in the bloc.
They include:
- AI systems using subliminal, manipulative, or deceptive techniques to distort behavior
- AI systems exploiting the vulnerabilities of individuals or specific groups
- Biometric categorization systems based on sensitive attributes or characteristics
- AI systems used for social scoring or evaluating trustworthiness
- AI systems used for risk assessments predicting criminal or administrative offenses
- AI systems creating or expanding facial recognition databases through untargeted scraping
- AI systems inferring emotions in law enforcement, border management, the workplace, and education
Several lawmakers had called for making the measures more expansive to ensure they cover ChatGPT.
To that end, the act imposes requirements on “foundation models,” such as large language models and generative AI.
Developers of foundation models will be required to apply safety checks, data governance measures, and risk mitigations before making their models public.
They will also be required to ensure that the training data used to feed their systems does not violate copyright law.
“Providers of such AI models will need to assess and take steps to mitigate risks to fundamental rights, health and safety, the environment, democracy and the rule of law,” Ceyhun Pehlivan, counsel at Linklaters and co-lead of the law firm’s telecommunications, media and technology and IP practice group in Madrid, told CNBC.
“Data governance requirements will also apply, such as examining data sources for suitability and possible bias.”
It is important to note that, while the law has been approved by lawmakers in the European Parliament, it is still a long way from becoming law.
Google announced Wednesday a number of new AI updates, including an advanced language model called PaLM 2. The company says PaLM 2 outperforms other leading systems for some tasks.
New AI chatbots like ChatGPT leverage large language models trained on massive amounts of data to generate human-like responses to user prompts, a capability that has fascinated many engineers, technologists, and academics.
But AI technology has been around for years and is integrated into more applications and systems than you might think. For example, it determines what viral videos and food photos appear on your TikTok and Instagram feeds.
The EU proposal aims to provide some rules for AI companies and organizations that use AI.
The rules have raised concerns in the tech industry.
The Computer and Communications Industry Association said it was concerned that the scope of the AI Act had become too broad and could catch harmless forms of AI.
“It is worrying to see that broad categories of useful AI applications, which pose very limited or no risk at all, would now face stringent requirements, or might even be banned,” Boniface de Champris, policy manager at CCIA Europe, told CNBC in an email.
“The European Commission’s original proposal for the AI Act takes a risk-based approach, regulating specific AI systems that pose a clear risk,” de Champris added.
“Members of the European Parliament have now introduced all kinds of amendments that change the very nature of the AI Act, which now assumes that very broad categories of AI are inherently dangerous.”
Dessislava Savova, head of the continental Europe technology group at law firm Clifford Chance, said the EU’s rules would set a “global standard” for AI regulation. However, other jurisdictions, including China, the U.S. and the U.K., are quickly developing their own responses, she added.
“The breadth of the proposed AI rules inherently means that AI players around the world need to be mindful,” Savova told CNBC in an email.
“The real question is whether the AI Act will set the only standard for AI. China, the U.S. and the U.K., to name a few, are defining their own AI policies and regulatory approaches. They will undoubtedly watch the AI Act negotiations closely as they tailor their own approaches.”
Savova added that the latest draft AI legislation emerging from Congress would codify many of the ethical AI principles that organizations have been promoting.
Sarah Chander, senior policy adviser at European Digital Rights, a Brussels-based digital rights group, said the law would subject foundation models like ChatGPT to “testing, documentation and transparency requirements.”
“While these transparency requirements will not eradicate the infrastructural and economic concerns around the development of these vast AI systems, they do require technology companies to disclose the amounts of computing power required to develop them,” Chander told CNBC.
“There are currently several initiatives to regulate generative AI around the world, such as in China and the U.S.,” said Pehlivan.
“However, the EU’s AI Act is likely to play a pivotal role in the development of such legislative initiatives around the world and lead the EU to once again become a standard-setter on the international stage, similarly to what happened with the General Data Protection Regulation.”
