Abstract

Although developing autonomous vehicles that perform well in practice is of clear interest, establishing morally aligned autonomous vehicles that reflect human values is also crucial. Prior studies have mainly examined moral decision making in the trolley dilemma as applied to autonomous vehicles, i.e., the driverless dilemma. Relatively little is known, however, about passenger acceptance (specifically liking, use, trust, and communication) of autonomous vehicles in the driverless dilemma. Results of a correlational study (Study 1) and an experimental study (Study 2) showed that participants, as passengers, were more likely to like, use, trust, and communicate with autonomous vehicles programmed to protect the self rather than to protect others or to act randomly in a one-passenger-one-pedestrian scenario representing the one-to-one dilemma. In most conditions, however, participants showed no preference among pro-self, pro-social, and random algorithms in one-passenger-several-pedestrian scenarios representing the utilitarian dilemma (Studies 1 and 2). Varying the passenger did not affect acceptance of autonomous vehicles in the driverless dilemma (Study 2).
