Abstract

This study explores whether humans' ethical perspectives predict their trust in autonomous systems (AS). Whether AS can make ethically acceptable decisions, especially in safety-critical situations involving value trade-offs, has become a significant determinant of how humans trust these systems. However, knowledge about these relation-based trust dimensions is largely absent from the current theoretical framework of human-AS trust. To address this gap, this study conducted an online survey via MTurk and Qualtrics that assessed participants' ethical perspectives, trust in automation, and propensity to trust. The results showed that: (1) trust in automation differed significantly across four ethical perspectives, confirming the predictive relationship between human ethical perspectives and trust in AS; (2) propensity to trust did not differ significantly among ethical orientations; and (3) trust in automation and propensity to trust were positively but only weakly associated. The latter two findings jointly suggest that trust in automation and trust propensity may be governed by distinct mechanisms. This study contributes to existing knowledge by (1) validating the predictive relationship between humans' ethical perspectives and their trust in AS, (2) revealing potential mechanisms underlying such differences, and (3) highlighting how these differences could inform the design of trustworthy AS.
