Abstract

The rapid deployment of semi-autonomous systems (i.e., systems that require human monitoring, such as Uber's autonomous vehicles) poses ethical challenges when these systems face morally laden situations. We ask how people evaluate the morally laden decisions of the humans who monitor these systems in situations of unavoidable harm. We conducted three pre-registered experiments (total N = 1,811) using modified trolley-dilemma scenarios. Our findings suggest that people apply different criteria when judging the morality and deserved punishment of regular-car versus AV drivers. Regular-car drivers are judged according to a consequentialist harm-minimization criterion, whereas AV drivers are judged according to whether or not they took action, with a more favorable prior for acting. Integrating judgment and decision-making research with moral psychology, the current research illuminates how the presence versus absence of automation affects moral judgments.
