Abstract

A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights—civil, legal, moral, etc.—should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is “yes,” I draw from some insights in the writings of Hans Jonas to defend my position.

Highlights

  • Whether or not we should grant rights to robots is a controversial question that continues to be hotly debated in the emerging field of robot ethics. Although this question may seem fairly straightforward, a review of the recent literature on that topic suggests otherwise. The question is ambiguous and imprecise with respect to at least five critical points, which in turn raise five distinct and important sub-questions: (i) Which kinds of robots deserve rights? (ii) Which kinds of rights do these robots deserve? (iii) Which criterion, or cluster of criteria, would be essential for determining when a robot could qualify for rights? (iv) Does a robot need to satisfy the conditions for agency in order to qualify for at least some level of moral consideration? (v) Assuming that certain kinds of robots may qualify for some level of moral consideration, which kind of rationale would be considered adequate for defending that view?

  • If we extend the analogy involving animals to social robots, it would seem to follow that if the latter are capable of exhibiting sentience of some sort, they could qualify as moral patients and warrant some level of moral consideration.


Introduction

In the emerging field of robot ethics—a branch of applied ethics as well as artificial intelligence (AI) that is sometimes referred to as “robo-ethics” [1,2] and “machine ethics” [3,4]—a controversial question that continues to be hotly debated is whether or not we should grant rights to robots. Some writers have suggested that certain kinds of socially intelligent robots could be eligible for rights, or at least could qualify for some level of moral consideration, because of their status as relational entities and the way in which humans relate socially to these kinds of robots (see, for example, Coeckelbergh [6]). Darling (p. 215), who defines a social robot as a “physically embodied, autonomous agent that communicates and interacts with humans on a social level,” believes that we need to distinguish social robots from industrial robots and from other kinds of service robots that are “not designed to be social.” In drawing this distinction, she describes some specific examples of early social robots, which include a range of “interactive robotic toys” such as Pleo (a robotic dinosaur), Aibo (a robotic dog), and Paro (a robotic baby seal), as well as research robots such as MIT’s Cog and Kismet. In limiting our analysis to the category of social robots, as defined above in Darling (p. 215), we can avoid having to consider (for now, at least) whether rights can/should be granted to some kinds of softbots, as well as to some sophisticated service robots that do not satisfy all of the conditions of our working definition of a social robot.

The “Rights” Question
The “Criterion” Question
The “Rationale” Question
Conclusions