Abstract

Robots will eventually perform norm-regulated roles in society (e.g., caregiving), but how will people apply moral norms and judgments to robots? By answering such questions, researchers can inform engineering decisions while also probing the scope of moral cognition. In previous work, we compared people's moral judgments of human and robot agents' behavior in moral dilemmas. We found that robots, compared with humans, were more commonly expected to sacrifice one person for the good of many, and they were blamed more than humans when they refrained from that decision. Thus, people seem to hold somewhat different normative expectations of robots than of humans. In the current project we analyzed in detail the justifications people provide for three types of moral judgments (permissibility, wrongness, and blame) of robot and human agents. We found that people's moral judgments of both agents relied on the same conceptual and justificatory foundation: consequences and prohibitions undergirded wrongness judgments, whereas attributions of mental agency undergirded blame judgments. For researchers, this means that people extend moral cognition to nonhuman agents. For designers, this means that robots with credible cognitive capacities will be considered moral agents, though perhaps ones regulated by different moral norms.
