Abstract

A discussion has been going on for some time concerning whether Artificial Intelligence (AI) systems should be conceived of as responsible moral entities, also known as “artificial moral agents” (AMAs). In this regard, we argue that the notion of “moral agency” is to be attributed only to humans, on the basis of their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful human control and due diligence, and we argue against fully automated systems in medicine. With this perspective in mind, we focus on the use of AI-based diagnostic systems and shed light on the complex networks of persons, organizations, and artifacts that come into being when AI systems are designed, developed, and used in medicine. We then discuss relational criteria of judgment in support of the attribution of responsibility to humans when adverse events are caused or induced by errors in AI systems.

Highlights

  • AI and Responsibility: When humans and Artificial Intelligence (AI) systems interact, the question arises of who or what can be held “responsible”, and possibly “liable”, for adverse events that may derive from this interaction.

  • Moral responsibility is here construed in the broad sense in which it may refer to several aspects of human agency, e.g., causality, accountability, liability, reactive attitudes such as praise and blame, and duties associated with social roles.

  • Whatever approach is adopted to ascribe responsibility in the medical field with AI systems, whether meaningful human control or the rights to information and explanation, and whether one requires ex ante supervision and controls or ex post countermeasures, it is clear that the entities that must be taken into consideration are far more numerous than a single AI system in use in a hospital and the human being who has used it for a particular medical case.

Summary

ML systems are called “black boxes”: humans can only see what goes in (the input data) and what comes out (the classification in output), but not what happens in between. This paradigm shift in AI comes with a drastic decrease in the direct involvement of programmers in the creation of computational systems that may be used to automatise tasks traditionally performed by humans. The risks of harmful outcomes are increased: in addition to the possibly ill-encoded knowledge of GOFAI, the rather mysterious knowledge created in the black box of ML needs to be harnessed. This task has significant implications for responsibility in AI, which this work is aimed at analyzing.
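
To make the black-box point concrete, here is a minimal, purely illustrative sketch (not taken from the paper) in which a support vector machine, the kind of classifier discussed later for brain analysis, is trained on synthetic data and then used only through its input/output interface:

```python
# Illustrative sketch (not from the paper): a trained ML model is used
# purely through its input/output interface, the "black box" of the text.
# Assumes scikit-learn and numpy are installed; all data are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic "diagnostic" features: 200 cases, 5 measurements each,
# with a binary label (e.g., lesion present / absent).
X_train = rng.normal(size=(200, 5))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1] > 0).astype(int)

# Training happens without the programmer hand-coding any decision rules.
model = SVC(kernel="rbf").fit(X_train, y_train)

# At use time, only the input and the output are visible:
new_case = rng.normal(size=(1, 5))     # what goes in
prediction = model.predict(new_case)   # what comes out
print("input:", new_case, "-> output:", int(prediction[0]))

# What happens "in between" (support vectors, kernel geometry) is not
# something a clinician inspects when acting on the prediction.
```

The user acting on `prediction` sees only `new_case` and the returned label; the learned decision function is, for practical purposes, opaque.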

Intentional Agency and Compatibilist Accounts of Human Moral Agency
Autonomy and Sentience in Human Moral Agency
Towards a Preventive System
Meaningful Human Control and Due Diligence
Case Study
Doctors’ Responsibility and the Principle of Confidence
Automation in Medicine and the Principle of Confidence
Support Vector Machines for Automated Brain Analysis
A Case Against Levels of Automation in Medical AI Systems
Deep Learning for Automated Breast Tissue Analysis
A Case Against Purely Data‐driven Approaches in Medical AI Systems
Conclusions