Abstract

We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor” (AMA). The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would take into account the human agent’s own principles and values. We argue that the AMA would respect and indeed enhance individuals’ moral autonomy, help individuals achieve a wide and a narrow reflective equilibrium, make up for the limitations of human moral psychology in a way that takes conservatives’ objections to human bioenhancement seriously, and implement the positive functions of intuitions and emotions in human morality without their downsides, such as biases and prejudices.

Highlights

  • In this paper we describe a form of “moral artificial intelligence” (Savulescu and Maslen 2015), i.e., a type of software that would give us moral advice more quickly and more efficiently than our brain could ever do, on the basis of moral criteria we input

  • Even in our hypertechnological world—so the slogan suggested—we cannot rely on computers to find moral answers. We have challenged this assumption by proposing a form of artificial intelligence that could assist us in making better, including better informed, moral decisions

Summary

Introduction

Suppose you need to bin your empty cup. Because you have an ethical commitment to respecting the environment, you want the cup to be recycled. Imagine, then, a piece of software that could tell you which option best serves that commitment. The software is a type of artificial intelligence capable of gathering information from the environment, processing it according to certain operational criteria we provide (for example, moral criteria such as moral values, goals, and principles), and advising on the morally best thing to do, i.e., on which option best meets our moral criteria (Savulescu and Maslen 2015). In this way, the software would perform the activities that would allow us to make (nearly) optimal moral choices but that we, as humans, usually do not or cannot perform for lack of the necessary mental resources or time (for instance, going through all the possible options and assessing the expected utilities of each possible choice). The AMA might be programmed to provide the agent not just with the single best piece of moral advice, but with a range of options, flagging the one that most closely complies with the agent’s moral standards and ranking the remaining options by their degree of compliance with those standards.
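To make the ranking idea concrete, the following is a minimal sketch, not anything taken from the paper: it assumes that “degree of compliance” can be modelled as a weighted score of each option against the agent’s own moral criteria, and all option names, criteria, and weights below are hypothetical, invented for the recycling example.

```python
# Hypothetical sketch of an AMA-style ranking: score each option against the
# agent's weighted moral criteria and return the options ordered by compliance.
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    # Per-criterion scores in [0, 1]: how well this option satisfies each criterion.
    scores: dict[str, float]


def rank_options(options: list[Option],
                 criteria_weights: dict[str, float]) -> list[tuple[Option, float]]:
    """Rank options by weighted compliance with the agent's moral criteria."""
    def compliance(option: Option) -> float:
        return sum(weight * option.scores.get(criterion, 0.0)
                   for criterion, weight in criteria_weights.items())

    return sorted(((o, compliance(o)) for o in options),
                  key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    # The agent weights respect for the environment heavily (hypothetical values).
    weights = {"respect_environment": 0.8, "convenience": 0.2}
    options = [
        Option("general waste bin nearby", {"respect_environment": 0.1, "convenience": 0.9}),
        Option("recycling bin across the street", {"respect_environment": 0.9, "convenience": 0.4}),
    ]
    for option, score in rank_options(options, weights):
        print(f"{option.name}: compliance = {score:.2f}")
```

A real AMA would of course require far richer inputs and criteria than this toy weighted sum, but the structure, ranking options by how closely they comply with the agent’s own moral standards, is the one described above.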

The Artificial Moral Advisor and the Ideal Observer
The Expertise of the Artificial Moral Advisor
Objections
Conclusions