Abstract

Even when a person knows exactly what an object looks like, searching for it in the visual field is a time-consuming and error-prone task. For Augmented Reality systems, new algorithms have been proposed to shorten search time and reduce human error. However, these algorithms might not always provide 100% accurate visual cues, which may affect users' perception of the algorithm's reliability and, in turn, their search performance. Here, we examined the detrimental effects of automation bias caused by imperfect cues presented in an Augmented Reality head-mounted display, with cues generated by the YOLOv5 machine learning model. Fifty-three participants were assigned to two groups that received either 100% accurate or 88.9% accurate visual cues, and their performance was compared with a control condition without any additional cues. The results show that cueing can increase performance and shorten search times. They also show that performance with imperfect automation was substantially worse than with perfect automation and that, consistent with automation bias, participants were frequently misled by incorrect cues.
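As an illustration of how such cues could be produced, the following is a minimal sketch, not the authors' implementation: a pretrained YOLOv5 model returns bounding boxes and confidences that an AR head-mounted display could render as highlights. The model variant, the confidence threshold, and the render_cue helper are assumptions made for this example.

import torch

# Load a small pretrained YOLOv5 model from the Ultralytics hub.
model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
model.conf = 0.5  # assumed confidence threshold below which no cue is shown

def detect_cues(frame):
    """Run object detection on one camera frame and return cue candidates."""
    results = model(frame)  # frame may be an image path or a numpy array
    # Each row: x1, y1, x2, y2, confidence, class index
    return results.xyxy[0].tolist()

def render_cue(bbox, confidence):
    """Hypothetical placeholder for drawing a highlight in the AR display."""
    print(f"Cue at {bbox} (confidence {confidence:.2f})")

# Example usage on a single frame captured from the headset camera:
# for x1, y1, x2, y2, conf, cls in detect_cues("frame.jpg"):
#     render_cue((x1, y1, x2, y2), conf)

In such a setup, imperfect cue accuracy (e.g., the 88.9% condition) would arise from missed or incorrect detections at this stage.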
