Abstract

The pursuit of Artificial Moral Agents (AMAs) is complicated. Disputes about the development, design, moral agency, and future projections of these systems have been reported in the literature. This empirical study explores these controversial matters by surveying (AI) Ethics scholars, with the aim of establishing a more coherent and informed debate. Using Q-methodology, we show the wide breadth of viewpoints on and approaches to artificial morality. Five main perspectives about AMAs emerged from our data and were subsequently interpreted and discussed: (i) Machine Ethics: The Way Forward; (ii) Ethical Verification: Safe and Sufficient; (iii) Morally Uncertain Machines: Human Values to Avoid Moral Dystopia; (iv) Human Exceptionalism: Machines Cannot Moralize; and (v) Machine Objectivism: Machines as Superior Moral Agents. A potential source of these differing perspectives is the failure of Machine Ethics to be widely observed or explored as an applied ethics rather than as a merely futuristic end. Our study helps improve the foundations for an informed debate about AMAs, one in which contrasting views and agreements are disclosed and appreciated. Such a debate is crucial to realizing an interdisciplinary approach to artificial morality, one that allows us to gain insights into morality while also engaging practitioners.
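The Q-methodology analysis mentioned above can be sketched in code. In Q-methodology, participants (Q-sorts) are treated as the variables: their sorts are inter-correlated and the correlation matrix is factor-analyzed to surface shared perspectives. The following is a minimal illustration with synthetic data; the matrix sizes, the PCA-style extraction, and the Kaiser criterion are all assumptions for illustration, not the study's actual analysis pipeline.

```python
import numpy as np

# Hypothetical Q-sort data: rows = participants, columns = statements,
# values = forced-distribution rankings (e.g., -4 ... +4).
rng = np.random.default_rng(0)
sorts = rng.integers(-4, 5, size=(20, 30)).astype(float)

# Correlate participants' sorts with one another (persons as variables).
corr = np.corrcoef(sorts)

# Extract factors via eigendecomposition of the person-correlation matrix.
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # sort factors by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Retain factors with eigenvalue > 1 (Kaiser criterion), a common heuristic.
n_factors = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])

# Each column of `loadings` groups participants who sorted statements
# similarly, i.e., a candidate shared perspective.
print(n_factors, loadings.shape)
```

In practice, Q-methodologists typically rotate the retained factors and interpret each one qualitatively from the statements its participants ranked highest and lowest; the eigendecomposition above only covers the extraction step.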

Highlights

  • The development of Artificial Moral Agents (AMAs), i.e., artificial systems displaying varying degrees of moral reasoning, is an open discussion within the realm of Artificial Intelligence (AI)

  • About the development of AMAs, Machine Ethics: The Way Forward (P1), which stands for advancing artificial morality, is in sharp contrast with Ethical Verification: Safe and Sufficient (P2), which is skeptical about the feasibility of, or need for, artificial morality

  • We expect that the marginal agreements reported in this research, about the inevitability of AI in morally salient contexts and the need for moral competence, will be further explored and developed into shared research principles. This empirical study explored the controversial topic of AMAs and aimed to establish an informed debate in which contrasting views and agreements are disclosed and appreciated

Introduction

The development of Artificial Moral Agents (AMAs), i.e., artificial systems displaying varying degrees of moral reasoning, is an open discussion within the realm of Artificial Intelligence (AI). The endeavor of developing such an AMA is central to the Machine Ethics project [3, 6], and it is quite controversial [39, 55, 84]. Empirically evaluated AMAs include GenEth, a general ethical dilemma analyzer that utilizes inductive logic programming to learn new ethical principles in situ [7], and Vanderelst and Winfield's consequentialist machine, which relies on functional imagination simulations to predict moral consequences [75]. Most of the controversies surrounding AMAs concern projected AI systems that rank high on the autonomy/ethics-sensitivity spectrum [78].
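The "functional imagination" idea behind the consequentialist machine can be sketched as follows: the agent internally simulates each candidate action with a forward model of the world and selects the action whose predicted outcome is least harmful. This is a toy illustration only; the world model, the function names, and the harm measure are all invented for the sketch and are not Vanderelst and Winfield's actual implementation.

```python
def simulate(world, action):
    """Toy forward model: predict the world state after an action.

    `world` is (human_pos, hole_pos, robot_pos) on a 1-D line.
    """
    human_pos, hole_pos, robot_pos = world
    if action == "block":
        robot_pos = hole_pos      # robot moves to shield the hazard
    next_human = human_pos + 1    # human keeps walking toward the hole
    return (next_human, hole_pos, robot_pos)

def predicted_harm(world):
    """Harm occurs if the human reaches the hole and it is unblocked."""
    human_pos, hole_pos, robot_pos = world
    return 1.0 if human_pos == hole_pos and robot_pos != hole_pos else 0.0

def choose_action(world, actions):
    # "Functional imagination": simulate every action, pick the least harmful.
    return min(actions, key=lambda a: predicted_harm(simulate(world, a)))

world = (1, 2, 0)  # human at 1, hole at 2, robot at 0
print(choose_action(world, ["wait", "block"]))  # -> block
```

The design choice worth noting is that the ethics lives entirely in the harm function applied to simulated futures, which is what makes the approach broadly consequentialist: better world models or richer harm measures change the agent's behavior without changing the selection loop.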
