Who is responsible when interactions with AI cause harm?



Credit: Just_Super

Artificial intelligence has become part of our daily lives, with millions of people using the technology for everything from creating grocery lists to seeking medical advice. People rely on it to help them make decisions, solve problems, and learn. But it has become clear that the technology is far from perfect. And as people place more trust in these tools, new questions are arising about who is responsible when the tools fail, or when their use leads to harmful or even disastrous outcomes.

Lawsuits are beginning to shed light on the legal challenges posed by AI. With little regulation of the technology or the companies that build and deploy it, experts suggest courts could be the first venue to answer these liability questions.

Anat Lior, JSD, an assistant professor at Drexel's Kline School of Law, is an expert in AI governance and liability, intellectual property law, insurance, and emerging technology law. To shed light on the legal issues surrounding this new technology, Lior shared her insights with the Drexel News Blog.

Who is currently held responsible when an artificial intelligence program causes harm?

Because most current AI-related tort disputes settle before reaching a judicial decision, there is still no clear consensus on which liability framework should apply, or on who should ultimately be held responsible when AI causes harm. What is clear is that the AI technology itself cannot be held responsible. Liability must be placed on the human or other legal entity behind it, because liability serves as a tool to shape human behavior and reduce risk. There is always a human in the background who can be motivated, through the threat of liability, to mitigate potential harm.

Scholars approach this issue in very different ways. Some advocate a strict liability model that holds the manufacturers and adopters of AI responsible regardless of the level of care they have taken.

Others prefer a fault-based framework in which AI developers, adopters, and users are only liable if they act unreasonably in the circumstances, i.e., if they fall short of the applicable standard of care.

Still others see AI as just another product on the market and opt for a product liability regime. Under strict liability, accountability is broader, and companies may be pushed to release only the safest versions of their systems. By contrast, liability under a negligence regime is narrower and may protect companies that have acted prudently, making it appealing to scholars who worry that strict liability could hinder innovation.

Additional proposals include a statutory safe harbor system that would exempt companies from liability if they follow specified guidelines.

How does the nature of AI as a “black box” technology impact the current tort law system in terms of assigning liability?

The unique characteristics of AI are putting pressure on long-standing tort concepts such as foreseeability, reasonableness, and causation. The lack of explainability in many AI systems can make it difficult to establish a clear causal link between a system's behavior and the resulting harm, which makes negligence claims particularly hard to bring, especially when assessing whether the harm was truly foreseeable.

Still, tort law has repeatedly shown its ability to evolve with new technologies, and it is likely to evolve again in the context of AI.

How is AI regulated?

In the absence of federal regulation, many U.S. states are developing or have already enacted their own AI laws to address potential harms associated with the technology.

Colorado and California provide two major examples that follow different paths. Colorado has adopted a comprehensive, consumer-focused framework aimed at preventing discriminatory outcomes, while California is pursuing a more targeted set of bills that address issues such as transparency, deepfakes, and employment-related discrimination. While nearly every state has engaged in some level of debate regarding AI regulation, reaching agreement on the appropriate scope and structure of such laws remains difficult.

Some states want to give the technology room to grow and innovate without being constrained by strict regulation, believing that AI's significant benefits outweigh its potential risks. Others believe that existing legal frameworks may already be sufficient to address the harms associated with AI. In any case, the law often lags behind emerging technologies. In the meantime, more flexible regulatory tools, such as liability insurance and industry standards, can help bridge the gap until broader agreement is reached on the appropriate regulatory approach.

What have we learned from AI copyright litigation?

Copyright law is at the center of one of the major legal debates surrounding AI. A number of ongoing lawsuits against companies that train and deploy generative AI systems, such as Gemini and ChatGPT, are testing the limits of the current copyright framework. Although it is still too early to draw firm conclusions, core doctrines such as fair use, direct and indirect infringement, and copyrightability are all being reconsidered and reframed as AI increasingly enters creative practices once understood to be exclusively human.

Reporters interested in speaking with Lior should contact Mike Tuberosa, assistant director of News and Media Relations, at mt85@drexel.edu or 215.895.2705.




