Hamburg Labor Court Ruling on AI in the German Workplace

A recent decision by the Hamburg Labor Court on a German trade union's attempt to enforce a ban on the use of AI in the workplace makes it clear once again that employers cannot allow the use of AI to go unchecked.

Given the impending entry into force of the AI Act, employers are well advised to take a moment to review their current IT landscape. The EU Parliament voted on the law on March 13th. Final checks are now being carried out by lawyers and linguists, and the text is expected to be formally adopted before the end of the parliamentary term; it also needs to be formally approved by the Council. The law will come into force 20 days after publication in the Official Journal of the EU. Its rules will then apply in stages (systems posing an unacceptable risk must be shut down within six months, i.e., by December of this year), with all rules in place after two years.

The idea behind the law is to regulate the use and development of artificial intelligence based on a risk-based approach. It applies to “providers,” “deployers,” “importers,” “distributors,” and “authorized representatives,” so essentially not just to software developers but also to users (and therefore employers and employees) and everyone involved in the software distribution chain. Companies worldwide may be affected if they provide or operate an AI system in the EU, or if output generated by an AI system is used within the EU (including in relation to employees within the EU).

Are you within scope?

There's a good chance this applies to your company, especially since AI systems are broadly defined as “machine-based systems that are designed to operate with varying levels of autonomy and that can, for explicit or implicit purposes, generate outputs such as predictions, recommendations or decisions that affect the physical or virtual environment.” If you use IT solutions that incorporate AI products and use the results generated by AI, you're covered. For employers, this includes any AI systems used for recruiting, selection, performance management and reward, for example.

To comply with the requirements set out in the Act, AI tools and products must be evaluated to determine the level of risk they pose to individual rights, especially with regard to discrimination, data and privacy protection. While many AI systems will likely pose no undue risk, systems used to manage individuals sit near the higher end of the risk scale: not unacceptable, but certainly high. In any case, they must be evaluated. Even for low-risk systems such as video games, recommendation systems and spam filters, the Act requires companies to disclose that content is generated by AI.

Given the broad definition of an AI system and the broad scope of the AI Act, even using a chatbot to answer general questions from candidates or employees may fall within its scope. You may still think this does not concern you, but keep in mind that the AI Act provides for high fines (up to 35 million euros or up to 7% of global turnover, whichever is higher) that also apply to small and medium-sized businesses (albeit at reduced levels).

Evaluate the AI you are already using

It is important to take a closer look at the systems currently in use to determine whether you are at risk. Employers are often so accustomed to their systems that they may not even realize AI is already embedded in them. Inventorying your systems and identifying potential gaps (which you should do anyway to comply with the GDPR and other legal and regulatory requirements, such as trade secret protection) will give you a first idea of the scope of your exposure.

Next, take a preliminary step to assess which risk category applies to each system. The AI Act provides for a risk classification based on the intended purpose and functionality of the AI product. The current draft annex includes a list of use cases that are considered high-risk AI systems, covering areas such as education, medical technology, critical infrastructure, law enforcement, border control, the administration of justice and democracy, and notably employment. If you determine that a system falls into the high-risk category, you should prepare for the stricter obligations and requirements that come with this (e.g. conformity assessments, registration in a public database, and in some cases a fundamental rights impact assessment). You should also address the potential ethical and reputational risks that such a system poses.
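To make this triage step concrete, here is a minimal sketch in Python of how an inventory entry and its risk category might be recorded. The four-tier classification follows the Act's risk-based model, but all system names, vendors, and fields are hypothetical examples, not a format prescribed by the Act.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    """The AI Act's four-tier, risk-based classification."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices, to be shut down
    HIGH = "high"                  # e.g. employment, education, law enforcement
    LIMITED = "limited"            # transparency obligations apply
    MINIMAL = "minimal"            # largely unregulated

@dataclass
class AISystemRecord:
    """One entry in an internal AI system inventory (hypothetical format)."""
    name: str
    vendor: str
    intended_purpose: str
    used_for_hr: bool              # HR use cases are listed as high-risk in the annex
    risk: RiskCategory
    open_actions: list[str] = field(default_factory=list)

# Example: a CV-screening tool relates to employment and is therefore high-risk.
inventory = [
    AISystemRecord(
        name="cv-screener",           # hypothetical tool
        vendor="ExampleVendor GmbH",  # hypothetical vendor
        intended_purpose="Pre-ranking of job applications",
        used_for_hr=True,
        risk=RiskCategory.HIGH,
        open_actions=["conformity assessment", "fundamental rights impact assessment"],
    ),
]

for record in inventory:
    if record.risk is RiskCategory.HIGH:
        print(f"{record.name}: high-risk, outstanding: {', '.join(record.open_actions)}")
```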

Regulating sensible handling of AI

Aiming for only minimal compliance with the AI Act is not recommended: it is all too easy to deviate from the standard and slip below it. Especially in the potentially sensitive world of employment relationships, it is far better to proactively put measures in place that protect an organization's brand from the loss of trust that strictly legal but unethical AI practices can cause.

Now is a good time to implement detailed AI policies, and related technical and organizational measures, around the use of AI in your capacity as an employer.

  • Consider who will own the deployment and oversight of AI within your company, and who on your board will ultimately be responsible.
  • In international businesses, the scope of some AI tools used in HR-related contexts may cross borders, so risk assessment of systems may require a global approach.
  • Additionally, what existing resources are available to enable efficient implementation of an AI compliance program? It may be that new roles should be created or existing roles adapted.
  • Consider who you should consult within your company (IT, security, DPO, legal, works council, ethics and compliance, HR, etc.).
  • You should assess whether existing internal policies and procedures need to be customized to meet new AI requirements.

When drafting an AI policy, think in detail, not in broad strokes. Consider how people, processes, and technology fit together. For example (a sketch of how such a policy could be captured in code follows this list):

  • Who may use AI: will you allow use in all departments?
  • Input data: What types of requests should be allowed: text, images, drawings, photos, video, audio, code? If code is allowed, should specific license conditions be a prerequisite?
  • Input Rights: Who should have rights to the input data, the company or a third party?
  • Confidentiality: Should certain input data be prohibited from use because it contains protected trade secrets or is considered sensitive or confidential?
  • Data protection: If input data contains personal data, how can employees check whether there is a legal basis for holding/processing the data?
  • Outputs: For what employment purposes will you allow the outputs to be used?
  • Checking: How should the factual accuracy, quality, and legality of the output data be checked?
  • Approval: Who needs to approve the use of each HR tool and who needs to approve the use of the output in each case?
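As a sketch of how these questions could translate into something machine-readable, the snippet below encodes a few of them as a Python dictionary that an internal approval workflow might consult. Every key, value, and department name is a hypothetical illustration, not a template mandated by the Act.

```python
# Hypothetical skeleton of an internal AI usage policy; all values illustrative only.
AI_USAGE_POLICY = {
    "permitted_departments": ["HR", "IT"],          # who may use AI at all
    "allowed_input_types": ["text", "images"],      # code excluded pending license review
    "prohibited_inputs": ["trade secrets", "special categories of personal data"],
    "personal_data_requires_legal_basis": True,     # GDPR check before processing
    "output_uses_allowed": ["drafting job ads"],    # whitelisted employment purposes
    "output_must_be_human_reviewed": True,          # accuracy/legality check on outputs
    "tool_approver": "DPO and works council",       # per-tool sign-off
}

def may_submit(department: str, input_type: str) -> bool:
    """Check whether a given department may submit a given input type."""
    return (
        department in AI_USAGE_POLICY["permitted_departments"]
        and input_type in AI_USAGE_POLICY["allowed_input_types"]
    )

print(may_submit("HR", "text"))     # True
print(may_submit("Sales", "code"))  # False
```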

As you may have guessed by now, all of this makes it clear that simply purchasing an external AI-based HR management system is not enough to ensure a company's compliance with the AI Act or with other areas of law, nor to guarantee sound procedures, reliable practices, or customer satisfaction. There are no “plug and play” tools that free companies from all the internal work required to achieve compliance. However, by starting early to assess the intended use of AI systems in HR, you can minimize the compliance burden when the time comes.


