Abstract

The notion of a “responsibility gap” with artificial intelligence (AI) was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy and the ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems – gaps in culpability, moral accountability, public accountability, and active responsibility – caused by different sources, some technical, others organisational, legal, ethical, and societal. Responsibility gaps may also occur with non-learning systems. The paper clarifies which aspects of AI may cause which gaps in which forms of responsibility, and why each of these gaps matters. It offers a critical review of partial and unsatisfactory attempts to address the responsibility gap: those which present it as a new and intractable problem (“fatalism”), those which dismiss it as a false problem (“deflationism”), and those which reduce it to only one of its dimensions or sources and/or present it as a problem that can be solved simply by introducing new technical and/or legal tools (“solutionism”). The paper also outlines a more comprehensive approach to addressing the responsibility gaps with AI in their entirety, based on the idea of designing socio-technical systems for “meaningful human control”, that is, systems aligned with the relevant human reasons and capacities.

Highlights

  • In 2004, Andreas Matthias introduced what he called the problem of the “responsibility gap” with “learning automata” (Matthias, 2004)

  • Lawyers and policy-makers proposing the revision of current legal liability regimes may either underestimate the importance of maintaining some form of human moral responsibility for the behaviour of artificial intelligence, or recognise this need without saying how moral and social practices – and legal rules – should change in order to govern a responsible transition to the use of AI

  • To improve the understanding of the problem of the “responsibility gap” with artificial intelligence (AI), we propose to rely on a comprehensive analysis of four forms of responsibility presented in the relevant philosophical and legal literature

Summary

Introduction

In 2004, Andreas Matthias introduced what he called the problem of the “responsibility gap” with “learning automata” (Matthias, 2004). Lawyers and policy-makers proposing the revision of current legal liability regimes (including the extension of strict and product liability regimes, and “electronic personhood”) may either underestimate the importance of maintaining some form of human moral responsibility for the behaviour of artificial intelligence, or recognise this need without saying how moral and social practices – and legal rules – should change in order to govern a responsible transition to the use of AI. We call this the risk of “legal solutionism”.

Varieties of Responsibility Gaps
Culpability Gaps
Moral Accountability Gaps
Public Accountability Gaps
Active Responsibility Gaps
Fatalism and Deflationism
The Risks of Solutionism
Explainable AI and “Technical Solutionism”
New Liability Regimes and the Risks of “Legal Solutionism”
Meaningful Human Control: the Concept
The “Tracking” Condition and its Payoffs for Responsibility
The “Tracing” Condition and its Payoffs for Responsibility
Conclusion