Innovative businesses, especially those involving artificial intelligence (AI), come with challenges and legal risks. As AI applications continue to advance and integrate into different business models and industries, companies may face more exposure to lawsuits.
Businesses and organizations can mitigate such risks by (i) ensuring that AI applications are programmed correctly; (ii) maintaining documentation showing that the AI input is correct, adequate and uncorrupted; (iii) adequately supervising AI applications and their outputs; and (iv) establishing guardrails against misuse of AI applications by users.
Claims of Potential Claimants
Claims by plaintiffs regarding AI applications are expected to fall into three categories:
-
Claims of inaccurate information created by generative AI, such as chatbots and image or audio generators
-
Allegations of interference with other IT systems, downtime, or financial loss (e.g. AI applications allegedly making unfavorable investment decisions)
-
Real-world accidents allegedly caused by AI applications, i.e. by autonomous vehicles, robots, or AI-controlled industrial facilities
To whom are such allegations made?
The parties against which such claims may be brought include, of course, the developer of the AI system, its operator (if different from the developer), and its user. In many cases, distributors may also face respective allegations. Finally, many of the above will be covered by insurance, and professional claimants usually target that coverage as well. It has also been proposed to create a separate legal entity for AI systems, an ‘e-person’,1 which would make it possible to bring claims against AI systems themselves. However, the e-person concept has found few advocates so far, so lawsuits against AI applications themselves are unlikely in the near future.
What legal bases may apply in civil disputes over AI?
From a legal perspective, potential claimants may look to various areas of law to support their claims regarding the use of AI.
-
Contractual basis: Claims may be based on contract law, as there is usually a contractual relationship between developers, operators and users of AI systems. Statutory law (such as rules on ancillary obligations), particularly in civil law jurisdictions, may give rise to obligations that go beyond the terms of the executed contract or terms of use. However, this is also the area where contracting parties can most easily protect their own interests by using judicious contractual clauses.
-
Product liability basis: Users and third parties may seek to bring claims under product liability law. As a prerequisite, the AI system must qualify as a “product” and must contain a “defect”. In the EU, for example, the draft Product Liability Directive explicitly includes software as a product; mere information, however, will likely not be recognized as a product. Furthermore, an alleged “flaw” in an AI system must amount to a “defect” within the meaning of product liability law. Product liability laws generally do not require products to be ‘perfect’, but rather to meet ‘legitimate expectations of safety’.2
-
Tort law basis: Most legal systems provide for damages claims based on negligence (i.e. tort law), and claimants may seek to apply these rules to AI systems as well. There are also proposals for specific liability regimes for AI systems.3 Further, plaintiffs may argue that the alleged problems with AI systems fall under existing rules of strict or heightened liability, namely the rules governing the liability of persons who carry out dangerous activities (Article 2050 of the Italian Civil Code, Article 1242 of the French Civil Code), of parents/guardians of minors or disabled persons (Articles 2047 and 2048 of the Italian Civil Code), of animal owners (Section 833 of the German Civil Code), of vehicle owners (Section 7 of the German Road Traffic Act), or of operators of hazardous installations (Sections 25 ff. of the German Atomic Energy Act). However, these approaches can be criticized on the grounds that such analogies fail to take into account that the above rules are clear exceptions to the general rules of civil liability and are therefore not suitable for analogous application.4
-
Regulatory basis:5 Claims may also be made under specific regulations, such as data protection laws and intellectual property (IP) laws. For example, when personal data is processed by AI systems, data protection regulations (such as the GDPR in Europe) impose requirements that must be adhered to. Under Art. 82 GDPR, a person who suffers material or non-material damage as a result of a breach of data protection requirements may bring a non-contractual claim for such damage. Additionally, regulations such as the GDPR stipulate fines for non-compliance. Furthermore, in relation to AI, claims may also be brought under specific liability regimes such as anti-discrimination legislation (e.g. the German anti-discrimination law “AGG”) and professional liability rules (e.g. D&O liability as well as physicians’ or lawyers’ liability).
-
Insurance:6 The increased use of AI systems opens the door to insurance for AI products. Just as many countries require car owners to carry insurance, there are calls to require manufacturers and professional operators of AI systems to carry professional liability insurance. Such insurance goes beyond the typical coverage of cyber insurance, which generally does not cover bodily harm, brand damage, or property damage. As insurers often impose requirements on policyholders, AI-specific insurance may not only reduce cost exposure but also further the development of best practices for companies using AI.
Open legal questions
A civil damages claim typically requires (1) a breach of law or contract, (2) an element of fault, and (3) a causal relationship between the breach and the damage. The implementation and use of AI systems raises several unresolved legal issues for potential claimants in this regard.
-
Burden of proof:7 One of the main challenges for claims of damages caused by AI applications is the burden of proof, which victims generally bear. However, regulators and legal commentators note that victims often lack the necessary insight into, and information about, the AI system in question. Against this background, the EU, for example, is working on an AI Liability Directive aimed at making it easier for victims to prove that AI systems have violated the law, by providing rights to information and establishing rebuttable presumptions of causality between a fault, the functioning of the AI system, and the damage. Apart from such regulatory rights to information, some courts may shift parts of the burden of proof to the party that possesses additional information, such as the developer of the respective AI application.
-
Attribution of fault:8 Where claims are asserted in connection with AI systems, fault cannot always be attributed to the involved entities, given the autonomous decision-making of AI applications, the limited knowledge of the potentially responsible parties, and the absence of subjective fault on the part of the AI application itself. Fault usually requires negligent or deliberate behavior, a concept that is not readily applicable to algorithm-based AI applications. Some therefore suggest giving AI systems a legal persona (“e-person”) to enable direct liability, while others suggest attributing “flaws” in an AI system to its operators, developers, or users.
-
Standard of care:9 Finally, there is a lively legal debate about the standard of care to be applied when operating AI systems. Different standards of care are suggested depending on the risk profile and capabilities of a particular AI system (such as whether it is used for private or business purposes). It is also debated whether the benchmark should be a fictional human standard (“human-machine comparison”) or a “machine-specific” standard of care. In addition, some argue that AI system developers must update their products according to the current state of science and technology, resulting in a relatively high standard of care.
Key considerations
When defending claims arising from the use of AI applications, companies should consider the following:
-
First, companies should be able to show that the input (that is, the training material) is correct, suitable for its intended purpose, and uncorrupted. Only then can the AI application apply the correct principles to new inputs. Otherwise, the situation is comparable to a math student who has been trained only in addition and subtraction and is asked to solve a problem that requires multiplication.
-
Next, the AI application must be programmed correctly. This of course applies to the “core” of the AI system, but it also concerns the interfaces between different AI systems, for example between a natural language processing application such as a chatbot and an AI application that solves real-world problems based on given input data. If the interface is implemented correctly, the problem-solving AI application can “understand” the question (as if both spoke the same language). Otherwise, it may not understand the question, or may misinterpret it and return incorrect results (as if they spoke different languages).
-
Third, AI applications need to be well monitored. Even if the input is correct and the programming is sound, the AI application should be properly monitored so that there is no basis for a claim that the system draws incorrect or biased conclusions from statistical data.
-
Fourth, it is also important that users of AI systems follow the instructions for their use. It is hard to predict every possible way a user could abuse an AI system. One example is a driver who taps a can of beer to the steering wheel to trick a car with AI features (such as a lane-keeping steering assistant) into fully autonomous driving, because the AI application then assumes the driver’s hands are on the wheel, in clear violation of the car’s instructions. It is therefore important to establish guardrails against misuse of AI applications by users.
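The monitoring and guardrail steps above can be sketched in code. The following is a minimal, illustrative Python example, not a legal or technical standard: all names (`BLOCKED_TOPICS`, `run_model`, `guarded_call`) are hypothetical, and a real system would need domain-specific checks.

```python
import json
import logging
from datetime import datetime, timezone
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_guardrails")

# Hypothetical misuse categories the operator has chosen to block.
BLOCKED_TOPICS = {"medical diagnosis", "legal advice"}

def is_allowed(prompt: str) -> bool:
    """Guardrail: reject prompts that fall into disallowed use categories."""
    return not any(topic in prompt.lower() for topic in BLOCKED_TOPICS)

def run_model(prompt: str) -> str:
    """Stand-in for the actual AI application."""
    return f"model output for: {prompt}"

def guarded_call(prompt: str) -> Optional[str]:
    """Validate input, call the model, and keep an auditable record."""
    record = {"ts": datetime.now(timezone.utc).isoformat(), "prompt": prompt}
    if not is_allowed(prompt):
        record["result"] = "rejected by guardrail"
        log.info(json.dumps(record))
        return None
    output = run_model(prompt)
    record["result"] = output
    log.info(json.dumps(record))  # the log can later support a defense
    return output
```

The design point is that every call, whether served or rejected, leaves a timestamped record, which supports both the monitoring and the documentation recommendations discussed in this article.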
Conclusion
Disputes over AI systems can arise on the basis of a variety of legal concepts, and, like most things in life, all aspects of AI systems (their development, operation and use) may become the subject of legal claims. To avoid disputes and to be well prepared when they arise, organizations should:
-
Be informed: It is important to understand that the development, operation and use of AI applications does not take place in a territory without clear legal boundaries. In addition to regulatory law, the civil law obligations discussed in this article should be borne in mind.
-
Consider every scenario: Developers and operators of AI systems should contractually bind their customers to specific terms of use and clearly explain these rules to maximize safety. Exclusions or limitations of liability can be a further means of avoiding and mitigating risks in contractual relationships.10
-
Risk mitigation starts at the beginning: When developing and training AI systems, adequate testing and selection of training materials are not only critical to the success of AI applications, but also key to risk mitigation.
-
Plan ahead: Where possible, the operation and output of AI systems should be logged, so that in the event of a dispute a defense can be based on these log files.
-
Stay vigilant: In any case, all individuals and organizations involved should carefully monitor and evaluate the performance of AI systems, keeping in mind that it is by definition impossible to monitor every part of the process; monitoring is often limited to the output of an AI system.
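The “plan ahead” recommendation can be illustrated with a small sketch. The following Python class is one possible design (not a legal requirement): it keeps a hash-chained record of AI inputs and outputs, so that log files offered in a dispute are tamper-evident. The class name and structure are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only, hash-chained log of AI inputs and outputs.

    Each entry embeds the hash of the previous entry, so later
    tampering with any record breaks the chain and is detectable.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, prompt: str, output: str) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "output": output,
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        self._last_hash = hashlib.sha256(payload).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

    def verify(self) -> bool:
        """Recompute the chain; False means a record was altered."""
        prev = "0" * 64
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

In practice such records would be written to durable storage, but even this minimal chain shows how logging can be made credible as evidence rather than merely convenient for debugging.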
