Robots and smart software have an increasing impact on our lives, and they make decisions that may have a profound effect on our welfare. Some of these decisions have a moral dimension. Hence, we need to consider (a) whether we want them making such decisions, and (b) if so, how we should proceed in equipping machines with "moral sensitivity" or even with "moral decision-making abilities." In their book, Moral Machines: Teaching Robots Right from Wrong, Wallach and Allen make an eloquent and forceful case that we should seriously consider granting machines such decision-making power. Their argument (in Chaps. 1 and 2) is that machines are already deployed in situations in which they make decisions that have a moral impact. Hence we should equip them with sensitivity to the moral dimensions of the situations in which increasingly autonomous machines will inevitably find themselves. This may lead to machines making moral decisions. The machines they refer to may be anything from software and softbots to robots, and in particular combinations of these. Through interconnected and open systems, situations might arise that are neither desirable nor foreseeable when the systems were designed. Whether we can actually build such systems (Chap. 3) is still an open question. And if we were to engineer artificially moral systems, would they count as truly moral systems? Wallach and Allen conclude (Chap. 4) by noting that human and artificial morality will be different, but that there is no a priori reason to rule out the notion of artificial morality. Moreover, they argue that the very attempt to construct artificial morality will prove worthwhile for all involved.

Raising these points is the first, and possibly the greatest, strength of their book: it puts the theme squarely on the agenda. Yet theirs is also a book of open and unanswered questions. On virtually all topics the jury is still out, as no common opinions have been established, no approaches proven, and no answers found. Their book also serves to illustrate how young this field of research still is, though at times it is a little disconcerting to find, yet again, that the answer to one of these open questions might be A, but then again it might not.

Writing a book that touches on several research domains, in this case moral philosophy, robotics, software development, and neuroscience, is always a hazardous enterprise. There is a real risk of not providing enough depth and thus losing the attention of specialists in any one of these domains. The specialist will be lost unless there is enough to be learned from the other domains to provide a fresh perspective on the research in his or her own. Providing an overview of the research on artificial morality (moral philosophy and machine decision-making) is a tall order. Though the field is relatively new, there is already a wide variety of research being conducted, ranging from moral learning algorithms and various logics for modeling moral decision-making to neural nets and nanotechnology. Overall, the book provides a good overview of most of the current research in the field. It nicely sets the stage in Chap. 5 for a discussion of the relationship between engineer and philosopher; the cooperation between the two raises various issues that occupy the remainder of the book, including questions such as: Who or what is leading? How can philosophers formulate their theories such that engineers can implement them?
V. Wiegel, Section of Philosophy, Delft University of Technology, PO Box 5015, 2600 GA Delft, The Netherlands. E-mail: v.wiegel@tudelft.nl