Abstract

Autonomous cars (ACs) controlled by artificial intelligence are expected to play a significant role in transportation in the near future. This study investigated determinants of trust in ACs. Trust in ACs influences several variables, including the intention to adopt AC technology. Studies on risk perception have shown that shared values determine trust in risk managers, and previous research has confirmed the effect of value similarity on trust in artificial intelligence. We focused on moral beliefs, specifically utilitarianism (belief in promoting a greater good) and deontology (belief in condemning deliberate harm), and tested the effects of shared moral beliefs on trust in ACs. We conducted three experiments (N = 128, 71, and 196, respectively), adopting a thought experiment similar to the well-known trolley problem. We manipulated shared moral beliefs (shared vs. unshared) and driver (AC vs. human), presenting participants with different moral dilemma scenarios. Trust in ACs was measured through a questionnaire. The results of Experiment 1 showed that shared utilitarian belief strongly influenced trust in ACs. In Experiments 2 and 3, however, we did not find statistical evidence that shared deontological belief had an effect on trust in ACs. The results of the three experiments suggest that the effect of shared moral beliefs on trust varies depending on the values that ACs share with humans. To promote AC implementation, policymakers and developers need to understand which values are shared between ACs and humans to enhance trust in ACs.
