“There will definitely be action in the coming months,” warns Forrester analyst Enza Iannopollo.
Tomorrow (August 2), the general-purpose AI rules of the European Union’s AI Act come into effect. To help the industry comply with the new regulations, the EU has developed a general-purpose AI (GPAI) code of practice.
This voluntary tool is designed to help the industry comply with the AI Act’s obligations for models with a wide range of capabilities, ones that can complete a variety of tasks and be deployed across different systems and applications. Examples include widely used AI models such as ChatGPT, Gemini and Claude.
The code sets out copyright and transparency rules, while certain advanced models deemed to pose “systemic risk” face additional voluntary obligations around safety and security.
Signatories commit to respecting restrictions on access to training data, such as subscriptions and paywalls. They also undertake to implement technical safeguards to prevent their models from generating output that reproduces content protected under EU law.
Signatories, including Anthropic, OpenAI, Google, Amazon, IBM and others, must also create and implement copyright policies that comply with EU law. xAI, owned by Elon Musk, has also signed the GPAI code, but only the sections that apply to safety and security.
The GPAI code asks signatories to continuously assess and mitigate the systemic risks associated with their AI models and to apply appropriate risk management measures throughout a model’s lifecycle. They are also asked to report serious incidents to the EU.
Additionally, companies will need to publish information about new AI models at launch and provide it, upon request, to the EU AI Office, relevant national authorities and those who integrate the models into their systems.
“While providers of generative AI (GenAI) models are directly responsible for meeting these new rules, it is noteworthy that companies using GenAI models and systems (those purchasing directly from a GenAI provider) will feel these requirements affect their value chain and third-party risk management practices.”
While the regulations expand accountability and enforcement for general-purpose AI models, many copyright holders in the region have expressed dissatisfaction.
In a statement, 40 signatories, including news publishers, artist groups, translators, and television and film producers, said the GPAI code “does not deliver on the promise of the EU AI Act itself”.
On behalf of its members, the European Writers’ Council said the code “missed the opportunity to provide meaningful protection of intellectual property” when it comes to AI.
“We strongly reject the claim that the code of practice strikes a fair and workable balance. This is simply not true, and it is a betrayal of the purpose of the EU AI Act.”
However, many consider the EU’s AI regulations to be perhaps the most robust in the world, and they are set to shape the risk management and governance practices of most global companies.
“The requirements may not be perfect, but they are the only binding set of rules for AI with global reach, representing the only realistic option for trustworthy AI and responsible innovation,” Iannopollo said.
The AI Act came into effect last August, and the region enforced its first set of obligations, concerning banned practices, six months later in February. Aside from the GPAI code, tomorrow also marks the deadline for EU member states to designate “national authorities” to oversee the application of the law and carry out market surveillance activities.
Penalties for violations under the law are steep, reaching up to 7% of a company’s global turnover, which means businesses need to be careful. “There will definitely be action in the coming months,” Iannopollo warned.
“The August 2 deadline of the EU AI Act sets a clear precedent and trickles downstream. Companies must be prepared to demonstrate that they are using AI in line with responsible practices.

“This is the first true test of AI supply chain transparency. An organisation’s data is not ready for AI if it cannot show where the data came from and how the model made its inferences.”
