Abstract

We examined how human operator trust in navigational assistance differed when the assistance was human versus autonomous. As autonomy becomes ever more ubiquitous, it is critical to understand how trust in autonomous systems differs from trust in another human. Benign navigational assistance was provided by either another human or an autonomous system and presented in an identical manner. Half of the subjects were deceived and told the assistance was provided by the opposite source. We quantified trust by how closely subjects' rover driving actions aligned with recommendations given by the navigational assistant. This metric of trust is objective, continuous, and unobtrusive. In addition, subjects self-reported their trust in the system after the experiment using a standard trust questionnaire. The presence of the navigational assistance changed subject behavior (p = 0.002), but there was no significant difference between trust in the human and autonomous navigational assistance systems. This suggests that our subject pool was neither more nor less trusting of an autonomous system than of assistance from another human, particularly when controlling for the system's efficacy. Self-reported trust on the post-experiment questionnaire correlated with objectively measured trust on difficult rover operating scenarios (p = 0.01, r = 0.45). Our findings inform future human-autonomy teaming design choices and provide a unique approach to quantify operator trust. Potential applications include crewed deep space missions, where communication delays may require ground controllers to be replaced with onboard autonomous systems while maintaining and quantifying trust throughout.
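A metric of the kind described above, scoring trust by how closely an operator's driving actions track the assistant's recommendations, could be sketched as follows. The function name, the use of commanded headings, and the linear scoring formula are all illustrative assumptions, not the authors' actual method.

```python
def objective_trust(actions, recommendations, max_deviation):
    """Hypothetical objective trust score in [0, 1].

    1.0 means the operator followed the recommendations exactly;
    0.0 means mean deviation at or beyond max_deviation.
    All inputs here are assumed to be commanded headings in degrees.
    """
    if len(actions) != len(recommendations) or not actions:
        raise ValueError("need equal-length, non-empty action/recommendation logs")
    # Mean absolute deviation between what was driven and what was recommended
    deviations = [abs(a - r) for a, r in zip(actions, recommendations)]
    mean_dev = sum(deviations) / len(deviations)
    # Linearly map deviation to a bounded, continuous trust score
    return max(0.0, 1.0 - mean_dev / max_deviation)

# Illustrative data: recommended vs. actually driven headings (degrees)
recommended = [10.0, 20.0, 30.0, 40.0]
driven = [12.0, 19.0, 33.0, 40.0]
score = objective_trust(driven, recommended, max_deviation=45.0)
```

Because the score is computed continuously from logged driving commands, it can be collected unobtrusively throughout a session, unlike a post-experiment questionnaire.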
