Abstract

Artificial agents such as robots, chatbots, and artificial intelligence systems can perpetrate a range of moral violations traditionally limited to human actors. This paper explores how people perceive the same moral violations differently depending on whether the perpetrator is an artificial agent or a human, addressing three research questions: How wrong are moral foundation violations by artificial agents compared to human perpetrators? Which moral foundations do artificial agents violate compared to human perpetrators? What leads to increased blame for moral foundation violations by artificial agents compared to human perpetrators? We adapt 18 human-perpetrated moral violation scenarios, which differ by the moral foundation violated (harm, unfairness, betrayal, subversion, degradation, and oppression), to create 18 matching agent-perpetrated scenarios. Two studies comparing human-perpetrated with agent-perpetrated scenarios reveal that agent-perpetrated violations are more often perceived as not wrong, or as violating a different foundation, than their human counterparts. People are less likely to classify violations by artificial agents as oppression or subversion, the two foundations most concerned with group hierarchy. Finally, artificial agents are blamed less than humans across moral foundations, and this blame is based more on the agent's ability and intention for every moral foundation except harm.
