Abstract

Trust calibration for a human–machine team is the process by which a human adjusts their expectations of the automation’s reliability and trustworthiness; adaptive support for trust calibration is needed to engender appropriate reliance on automation. Herein, we leverage an instance-based learning ACT-R cognitive model of decisions to obtain and rely on an automated assistant for visual search in a UAV interface. This cognitive model matches human reliance decisions well on predictive power statistics, and we obtain from the model an internal estimate of automation reliability that mirrors human subjective ratings. The model is able to predict the effect of various potential disruptions, such as environmental changes or particular classes of adversarial intrusions, on human trust in automation. Finally, we consider the use of model predictions to improve automation transparency in ways that account for human cognitive biases, in order to optimize the bidirectional interaction between human and machine by supporting trust calibration. The implications of our findings for the design of reliable and trustworthy automation are discussed.

Highlights

  • The interpersonal trust literature asserts that trust in another person is influenced by indicators of trustworthiness such as loyalty (John et al., 1984), integrity (Mayer et al., 1995), and competence (Grover et al., 2014)

  • In Blaha et al. (2020), we developed a computational model of the trust and reliance calibration process using instance-based learning theory (Gonzalez et al., 2003; Gonzalez and Dutt, 2011) integrated into the ACT-R computational cognitive architecture (Anderson et al., 2004)

  • We introduce a general methodology for studying effects of trust calibration and reliance in automation


Introduction

The interpersonal trust literature asserts that trust in another person is influenced by indicators of trustworthiness such as loyalty (John et al., 1984), integrity (Mayer et al., 1995), and competence (Grover et al., 2014). According to Lee and See (2004), trust calibration is part of a closed-loop process that influences an operator’s intentions and decisions to rely on the automation (or not). In their process model, trust in the automation evolves based on performance feedback from the technology, organizational influences, cultural differences, and one’s propensity to trust. While all of these factors likely influence trust evolution, a recently developed cognitive model of trust calibration suggests that system feedback plays a particularly powerful role in trust calibration (Blaha et al., 2020).
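
To make the instance-based learning mechanism referenced above concrete, the sketch below illustrates the standard IBL computations of base-level activation, retrieval probability, and blended value as described in Gonzalez et al. (2003) and the ACT-R architecture. It is a minimal illustration, not the published Blaha et al. (2020) model; the Instance class, function names, parameter values, and the "rely"/"self" options are illustrative assumptions.

import math
import random

# Illustrative ACT-R/IBL defaults; not the parameter values fit by Blaha et al. (2020).
DECAY = 0.5                            # base-level decay d
NOISE_S = 0.25                         # activation noise scale s
TEMPERATURE = NOISE_S * math.sqrt(2)   # Boltzmann temperature tau

class Instance:
    """One stored experience: the option taken and the outcome observed."""
    def __init__(self, option, utility, timestamp):
        self.option = option            # e.g., "rely" (use automation) or "self" (search unaided)
        self.utility = utility          # observed payoff, e.g., 1 = correct, 0 = error
        self.timestamps = [timestamp]   # occasions on which this instance was created or reinforced

def activation(instance, now):
    """Base-level activation: recency- and frequency-weighted sum plus logistic noise."""
    base = math.log(sum((now - t) ** -DECAY for t in instance.timestamps))
    u = min(max(random.random(), 1e-10), 1 - 1e-10)
    return base + NOISE_S * math.log((1 - u) / u)

def blended_value(memory, option, now):
    """Expected utility of an option: stored outcomes blended by retrieval probability."""
    pool = [m for m in memory if m.option == option]
    if not pool:
        return 0.0
    acts = [activation(m, now) for m in pool]
    weights = [math.exp(a / TEMPERATURE) for a in acts]
    total = sum(weights)
    return sum((w / total) * m.utility for w, m in zip(weights, pool))

# Toy run: eight successful and one failed reliance experience, plus one unaided success.
memory = [Instance("rely", 1, t) for t in range(1, 9)]
memory += [Instance("rely", 0, 9), Instance("self", 1, 5)]
print(blended_value(memory, "rely", 10), blended_value(memory, "self", 10))

In this reading, the blended value of "rely" relative to "self" stands in for the operator’s current reliance preference, and because recent outcomes dominate the activation term, system feedback naturally drives how that preference (and the associated reliability estimate) shifts over time.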
