Abstract

Previous work has shown that people provide different moral judgments of robots and humans in moral dilemmas. In particular, robots are blamed more when they fail to intervene in a situation in which they could save multiple lives but would have to sacrifice one person's life. Previous studies were all conducted with U.S. participants; the present two experiments provide a careful comparison of moral judgments among Japanese and U.S. participants. The experiments assess multiple ways in which cross-cultural differences in moral evaluations may emerge: in the willingness to treat robots as moral agents; in the norms that are imposed on robots' behavior; and in the degree of blame that accrues to robots when they violate those norms. Even though Japanese and U.S. participants differ to some extent in their treatment of robots as moral agents and in the particular norms they impose on them, the two cultures show parallel patterns of greater blame for robots that fail to intervene in moral dilemmas.
