Abstract

This paper has three concerns. First, it offers an etymological and genealogical study of the phenomenon of responsibility. Second, it gives an overview of the three fields of robot ethics as a philosophical discipline and discusses the fundamental questions that arise within them. Third, it explains how responsibility is spoken about in these three fields and how responsibility is attributed in general. As a philosophical paper, it presents a theoretical approach; it makes no practical suggestions as to which robots should bear responsibility under which circumstances, or how guidelines for the responsible use of robots should be formulated.

Highlights

  • It is currently assumed that technological developments are radically changing our understanding of the concept of and the possibilities of ascribing responsibility

  • The assumption of a transformation of responsibility is fed on the one hand by the fundamental upheavals in the nature of ‘the’ human being, which are attributed to the development of autonomous, self-learning robots

  • Even if it is justifiable to include artificial systems in ethical reflection, the objection that robot ethics raises no genuinely new questions applies not only to robot ethics but to any ethics restricted to a specific context, as long as we agree on the human being as origin and pivot of ethical reflection per se


Introduction

It is currently assumed that technological developments are radically changing our understanding of the concept of responsibility and the possibilities of ascribing it. One speaks of radical paradigm shifts, and of a corresponding transformation of our understanding of responsibility in the organizational forms of our social, political, and economic systems, due to the challenges posed by robotization, automation, digitization, and Industry 4.0. A second objection holds that even if it is justifiable to include artificial systems in ethical reflection, they do not raise any questions that have not been asked long before in more traditional ethical arenas. As for this second accusation, there is not a lot to answer; this criticism applies not only to robot ethics but to any ethics restricted to a specific context (such as animal ethics, climate ethics, and health care ethics), as long as we agree on the human being as origin and pivot of ethical reflection per se. Which competences define agency? What are the prerequisites for moral agency? With what moral values should artificial systems be equipped? What moral self-understanding underlies 'bad' behavior towards robots? In what areas of human expertise (be it industry, the military, medicine, elderly care, service, or others) do we still want to rely, partly or significantly, on human rather than artificial expertise? It is intuitively evident that questions of ascribing, delegating, sharing, and dividing responsibility are raised in these spheres.

What Is Responsibility?
The Subject of Responsibility
The Object of Responsibility
The Authority of Responsibility
The Addressee of Responsibility
The Normative Criteria of Responsibility
What Is Robot Ethics?
The Three Fields of Robot Ethics
Ascribing Responsibility in Man-Robot Interaction
Robots as Moral Agents and the Prerequisites for Ascribing Responsibility
Robots as Moral Patients and the Relational Elements of Responsibility
Inclusive Approaches in Robot Ethics
Conclusions

