Abstract

The main objective of this paper is to show why AI researchers’ attempts to develop moral machines raise concerns about how such machines could attain any genuine level of morality. By comparing and contrasting Howard and Muntean’s model of a virtuous Artificial Autonomous Moral Agent (AAMA) (2017) with Bauer’s model of a two-level utilitarian Artificial Moral Agent (AMA) (2020), I conclude that both models raise crucial issues, albeit in different ways. These issues derive from the complex relationship between human cognition and moral reasoning, as refracted through the idea of moral AI. In this context, special attention is paid to the complications triggered by analogical thinking about replicating human morality in the field of machine ethics.
