Abstract

Studies have demonstrated that humans appear to apply norms of human-human interaction to their interaction with machines. Yet, there exist subtle differences in people's perceptions of automated aids compared to humans. We examined factors differentiating human-human and human-automation interaction, wherein participants (n = 180) performed a luggage-screening task with the assistance of human or automated advisers that differed in pedigree (expert vs. novice) and reliability (high vs. low). Dependence on advice was assessed. Participants agreed more with an automated 'novice' than a human 'novice', suggesting a bias toward automation. Automation biases broke down when automated aids portrayed as 'experts' generated errors, leading to a drop in compliance and reliance on automation relative to humans. The results have implications for the development of theoretical and computational models of optimal user dependence on decision aids.
