Study: Humans and AI share responsibility for AI-induced harm

Artificial intelligence (AI) is becoming an integral part of our daily lives, and with it comes the pressing question of who is responsible when AI goes wrong. Since AI lacks consciousness and free will, it is difficult to blame the system itself for its mistakes. Yet AI systems operate semi-autonomously through complex and opaque processes, so even the human stakeholders who develop and deploy them cannot always predict or prevent the harm these systems cause. Traditional ethical frameworks therefore struggle to assign responsibility for such harms, producing what is known as the responsibility gap in AI ethics.

Recent research by Hyung-Gure Noh, Ph.D., assistant professor of philosophy at Pusan National University in South Korea, sheds light on the philosophical and empirical questions surrounding moral responsibility in the context of AI systems. The study criticizes traditional moral frameworks, arguing that because they center on human psychological capacities such as intention and free will, they make it virtually impossible to attribute responsibility to either AI systems or their human stakeholders. The findings were published in Topoi on November 6, 2025.

“As AI technologies become more deeply integrated into our lives, instances of AI-mediated harm will certainly increase. Therefore, it is important to understand who is morally responsible for unintended harm caused by AI,” says Dr. Noh.

Under traditional ethical frameworks, AI systems cannot be blamed for harm, because these frameworks typically require that an agent possess certain mental capacities in order to be morally responsible. AI systems lack conscious understanding, that is, the ability to grasp the moral significance of their own actions. They do not undergo subjective experience, and so lack phenomenal consciousness. They do not have complete control over their actions and decisions, and they lack intention, the capacity to make the deliberate choices that ground behavior. Finally, such systems often cannot provide reasons or explanations for their behavior. Given these gaps, blaming the system itself is misplaced.

The study also examines Luciano Floridi’s non-anthropocentric theory of agency and responsibility in AI, a view endorsed by other researchers in the field. This theory replaces traditional ethical frameworks with the idea of censorship: human stakeholders have a duty to prevent AI from causing harm by monitoring and modifying systems, and by disconnecting or deleting them if necessary. The same obligations extend to AI systems themselves, provided they possess a sufficient level of autonomy.

“Rather than insisting on traditional ethical frameworks in the context of AI, it is important to acknowledge the idea of distributed responsibility. It implies a shared obligation for both human stakeholders, including developers, and the AI agents themselves to ensure that errors are corrected quickly and that their recurrence is prevented, thereby strengthening ethical practices in both the design and use of AI systems,” says Dr. Noh.
