Abstract

Trust has been identified as a critical factor in the success and safety of interaction with automated systems. Researchers have described “trust calibration” as an apt design goal: user trust should be at an appropriate level given a system’s reliability. One factor in user trust is the degree to which a system is perceived as humanlike, or anthropomorphic. However, relevant prior work does not explicitly characterize trust appropriateness, and generally considers visual rather than behavioral anthropomorphism. To investigate the role of humanlike system behavior in trust calibration, we conducted a 2 (communication style: machinelike, humanlike) × 2 (reliability: low, high) between-subjects online study in which participants collaborated with an Automated Target Detection (ATD) system to classify a set of images over five rounds of gameplay. Participants chose how many images to allocate to the automation before each round, and appropriate trust was defined as the allocation that optimized performance. We found that communication style and reliability influenced perceptions of anthropomorphism and trustworthiness. Participants in the low- and high-reliability conditions demonstrated overtrust and undertrust, respectively. We discuss the implications of our findings for the design and research of automated and autonomous systems.
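The optimal-allocation criterion can be illustrated with a minimal sketch assuming a simple per-round performance model; the symbols \(r_a\), \(r_h\), and \(C\), and the model itself, are illustrative assumptions rather than the study’s actual definition. Suppose the automation classifies each of its \(n\) allocated images correctly with probability \(r_a\), and the participant can classify at most \(C\) of the remaining \(N - n\) images within the round, each correctly with probability \(r_h\). Expected performance and the performance-optimal allocation are then

\[
\mathbb{E}[\text{correct} \mid n] = n\,r_a + \min(N - n,\; C)\,r_h,
\qquad
n^{*} = \operatorname*{arg\,max}_{0 \le n \le N} \mathbb{E}[\text{correct} \mid n].
\]

Under this sketch, allocating more than \(n^{*}\) images corresponds to overtrust and fewer to undertrust; the paper’s own performance model may differ.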
