Abstract

Can a machine be a genuine cause of harm? The obvious answer is affirmative. The toaster that flames up and burns down a house is said to be the cause of the fire, and in some weak sense we might even say that the toaster was responsible for it; but the toaster is broken or defective, not immoral or irresponsible, though possibly the engineer who designed it is. But what about machines that decide things before they act, that determine their own course of action? Somewhere between digital thermostats and the murderous HAL of 2001: A Space Odyssey, autonomous machines are quickly gaining in complexity, and most certainly a day is coming when we will want to blame them for genuinely causing harm, even if philosophical issues concerning their moral status have not been fully settled. When will that be? Without lapsing into futurology or science fiction, Wallach and Allen predict that within the next few years "there will be a catastrophic incident brought about by a computer system making a decision independent of human oversight" (p. 4). In this light, philosophers and engineers should not wait for a threat of robot domination before determining how to keep the behavior of machines within the scope of morality. The practical concerns that motivate such an inquiry, and indeed this book, are already here.

Moral Machines is an introduction to this newly emerging area of machine ethics. It is written primarily to stimulate further inquiry by both ethicists and engineers, and as such, it does not get bogged down in dense philosophical prose or technical specification. It is, in other words, comprehensible to the general reader, who will walk away informed about why machine morality is already necessary, where we are with various attempts to implement it, and the authors' recommendations of where we need to be.

Chapter One notes the inevitable arrival of autonomous machines and the possible harm that can come from them. Automated agents are quickly being integrated into modern life: they regulate the power grid in the United States, monitor financial transactions, make medical diagnoses, and fight on the battlefield. A failure of these systems to behave within moral parameters could have devastating consequences. As they become more and more autonomous, Wallach and Allen argue, it becomes more and more necessary that they employ "ethical subroutines" to evaluate their possible actions before they are executed.

Chapter Two argues that machine morality should unfold in the dynamic interplay between ethical sensitivity and increasingly complex autonomy, and it presents several candidate models for automated moral agents, or AMAs. Borrowing from Moor, the authors mark a threefold division among kinds of ethical agents: "implicit," "explicit," and "full." Implicit ethical agents are constrained so that their behavior conforms to moral standards; explicit agents engage in ethical decision making; and full agents are, like human beings, conscious and possess free will. Robots, Wallach and Allen argue, are capable of being the first, while the question of whether they can be explicit or full ethical agents is set aside.

After a brief digression in Chapter Three to address whether we really want machines making moral decisions, the issue of agency reappears in Chapter Four, where the ingredients of full moral agency (free will, understanding, and consciousness) are addressed.
This review is a slightly revised version of one that originally appeared in the January/February 2009 issue of Philosophy Now.
