Abstract

Human trust in automation is widely studied because the level of trust influences the effectiveness of the system (Muir, 1994). It is vital to examine the role that people play and how they interact with the system (Hoff & Bashir, 2015). In the decision-making literature, an interesting phenomenon is the description-experience gap, with a typical finding that experience-based choices underweight small probabilities, whereas description-based choices overweight them (Hertwig, Barron, Weber, & Erev, 2004; Hertwig & Erev, 2009; Jessup, Bishara, & Busemeyer, 2008). We applied this description-experience gap concept to the study of human-automation interaction by having Amazon Mechanical Turk workers evaluate emails as legitimate or phishing. An anti-phishing warning system provided recommendations to the user at a reliability level of 60%, 70%, 80%, or 90%. Additionally, the way in which reliability information was conveyed was manipulated with two factors: (1) whether the reliability level of the system was stated explicitly (i.e., description); and (2) whether feedback was provided after the user made each decision (i.e., experience). Our results showed that as the reliability of the warning system increased, so did decision accuracy, agreement rate, self-reported trust, and perceived system reliability, consistent with prior research (Lee & See, 2004; Rice, 2009; Sanchez, Fisk, & Rogers, 2004). The increase in performance and trust with increasing reliability indicates that participants were paying attention to and using the automation to make decisions. Feedback also strongly influenced performance and the establishment of trust, whereas description affected only self-reported trust. The effect of feedback strengthened at the higher levels of reliability, showing that individuals benefited the most from feedback when the automated warning system was more reliable.
Additionally, unlike prior studies that manipulated description and experience/feedback separately (Hertwig, 2012), we varied the description and feedback conditions systematically and discovered an interaction between the two factors. Our results show that feedback is more helpful in situations that do not provide an explicit description of the system's reliability than in those that do. An implication of the current results for system design is that feedback should be provided whenever possible. This recommendation is based on the finding that providing feedback benefited both users' performance and their trust in the system, and on the expectation that systems in use are mostly of high reliability (e.g., > .80). A note for researchers in the field of human trust in automation is that, if only subjective measures of trust are used in a study, providing a description of the system's reliability will likely inflate those trust measures.