Abstract

Robots are becoming an integral part of society, yet our moral stance toward these non-living objects is unclear. In two experiments, we investigated whether anthropomorphic appearance and anthropomorphic attributions modulate people's utilitarian decision making about robotic agents. In Study 1, participants were presented with moral dilemmas in which the to-be-sacrificed agent was either a human, a human-like robot, or a machine-like robot. These victims were described in either neutral or anthropomorphic priming stories. Study 2 teased apart anthropomorphic attributions of agency and affect. Results indicate that although machine-like robots were sacrificed significantly more often than humans and human-like robots, the effect of anthropomorphic priming was the same for all three agent types (Study 1), and this effect was driven mainly by the attribution of affective states rather than agency (Study 2). That is, when people attribute affective states to robots, they are less likely to sacrifice them in order to save humans.
