What does the EU's general-purpose AI code mean for businesses?



“There will, undoubtedly, be action in the coming months!” warns a Forrester analyst.

The European Union's General-Purpose Artificial Intelligence (GPAI) Code of Practice comes into effect tomorrow (2 August).

This voluntary tool is designed to help industry comply with AI Act obligations for models with a wide range of capabilities — models that can perform many different tasks and be integrated into a variety of systems and applications. Examples include widely used AI models such as ChatGPT, Gemini, and Claude.

The code sets out copyright and transparency rules, with certain advanced models deemed to pose “systemic risk” facing additional voluntary obligations around safety and security.

Signatories commit to respecting restrictions on access to data used to train models, such as subscriptions and paywalls. They also undertake to implement technical protection measures that prevent their models from generating output that reproduces content protected under EU law.

Signatories, including Anthropic, OpenAI, Google, Amazon, IBM, and others, must create and implement copyright policies that comply with EU law. xAI, owned by Elon Musk, also signed the GPAI code, but only the chapter covering safety and security.

The GPAI code asks signatories to continuously assess and mitigate systemic risks associated with their AI models, applying appropriate risk-management measures throughout the model lifecycle. They are also asked to report serious incidents to the EU.

Additionally, companies will need to publish information about new AI models at launch. This information, along with further data, must be made available to the EU AI Office and the relevant national authorities upon request.

While providers of generative AI (GenAI) models are directly responsible for meeting these new rules, it is worth noting that companies using GenAI models and systems (those purchasing directly from a GenAI provider) will feel the effects of these requirements through their value chains and third-party risk-management practices.

Despite the regulations expanding accountability and enforcement for general-purpose AI models, many copyright holders in the region have expressed dissatisfaction.

In a statement, 40 signatories, including news publications, artist groups, translators, and television and film producers, said the GPAI code “does not fulfil the promises of the EU AI Act itself.”

Speaking on behalf of its members, the European Writers’ Council said that the code “missed the opportunity to provide meaningful protection of intellectual property” when it comes to AI.

“We strongly reject the claim that the code of practice strikes a fair and viable balance. This is simply untrue, and a betrayal of the purpose of the EU AI Act.”

However, the EU's AI regulations are perhaps the most robust in the world, and they are set to shape risk-management and governance practices for most global companies.

“The requirements may not be perfect, but they are the only binding ruleset for AI with a global reach, representing the only realistic option for trustworthy AI and responsible innovation,” Iannopollo said.

The AI Act came into effect last August, and the bloc enforced its first set of obligations, concerning banned practices, six months later in February. Separately from the GPAI code, tomorrow also marks the deadline for EU member states to designate national authorities to oversee the application of the law and carry out market-surveillance activities.

Penalties for violations under the law are steep, reaching up to 7pc of a company's global turnover, which means businesses need to tread carefully. “There will, undoubtedly, be action in the coming months!” warned one leading analyst.

The GPAI code “sets a clear precedent that trickles downstream. Companies need to be prepared to demonstrate that they are using AI in line with responsible practices.

“This is the first true test of AI supply-chain transparency. An organisation's data is not ready for AI if it cannot show where that data came from and how the model made its inferences.”
