Abstract

Future Care Robots (CRs) should be able to balance a patient's often conflicting rights without ongoing supervision. Many of the trade-offs faced by such a robot will require a degree of moral judgment. Some progress has been made on methods to guarantee that robots comply with a predefined set of ethical rules; in contrast, methods for selecting these rules are lacking. Approaches that depart from existing philosophical frameworks often do not result in implementable robotic control rules, while machine learning approaches are sensitive to biases in the training data and suffer from opacity. Here, we propose an alternative: an empirical, survey-based approach to rule selection. We suggest this approach has several advantages, including transparency and legitimacy. The major challenge for this approach, however, is that a workable solution, or social compromise, must be found: it must be possible to obtain a consistent and agreed-upon set of rules to govern robotic behavior. In this article, we present an exercise in rule selection for a hypothetical CR to assess the feasibility of our approach. We assume the role of robot developers using a survey to evaluate which robot behaviors potential users deem appropriate in a practically relevant setting, i.e., patient non-compliance. We evaluate whether such behaviors can be identified through consensus. Assessing a set of potential robot behaviors, we surveyed the acceptability of robot actions that potentially violate a patient's autonomy or privacy. Our data support the empirical approach as a promising and cost-effective way to query ethical intuitions, allowing us to select behavior for the hypothetical CR.

Highlights

  • Care Robots (CRs) have been proposed as a means of relieving the disproportionate demand the growing group of elderly people places on health services (e.g., [13,29,31,58])

  • An increased autonomy implies that smart care robots should be able to balance a patient’s, often conflicting, rights without ongoing supervision

  • The first aim of this paper is to propose an approach for rule selection for CRs, complementary to existing approaches

Introduction

Care Robots (CRs) have been proposed as a means of relieving the disproportionate demand the growing group of elderly people places on health services (e.g., [13,29,31,58]). A number of research groups have developed methods to implement a chosen set of ethical rules in robots (e.g., [7,8,44,61,63,64]), although this field is still in its infancy [26]. Researchers have derived ethical rules from frameworks such as utilitarianism (Pontier and Hoorn [52]), Kantian deontology [33], and the Universal Declaration of Human Rights [57]. These top-down approaches have so far failed to yield practically relevant rules for guiding CR behavior; selecting an ethical framework is a thorny issue in itself.

