Abstract

Even when people know exactly what an object looks like, searching for it in their visual field is time-consuming and error-prone. In Augmented Reality systems, new algorithms have been proposed to speed up search and reduce human error. However, these algorithms may not always provide 100% accurate visual cues, which can affect users' perceived reliability of the algorithm and, in turn, their search performance. Here, we examined the detrimental effects of automation bias caused by imperfect cues presented in an Augmented Reality head-mounted display, with cues generated by the YOLOv5 machine learning model. Fifty-three participants in two groups received either 100% accurate visual cues or 88.9% accurate visual cues, and their performance was compared with a control condition that included no additional cues. The results show that cueing can increase performance and shorten search times. They also show that performance with imperfect automation was substantially worse than with perfect automation and that, consistent with automation bias, participants were frequently misled by incorrect cues.
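
The abstract does not describe how YOLOv5 detections were turned into AR cues; the sketch below is only an illustration, assuming the public ultralytics/yolov5 PyTorch Hub interface, of how per-frame detections for a target class could be extracted as candidate cue bounding boxes. The confidence threshold, function name, and target-class filtering are hypothetical choices, not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation): obtaining YOLOv5
# detections that could drive visual cues in an AR head-mounted display.
# Assumes the public ultralytics/yolov5 PyTorch Hub model; the confidence
# threshold and target-class filter are hypothetical.
import torch

# Load a pretrained YOLOv5 model from PyTorch Hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.conf = 0.5  # hypothetical confidence threshold for accepting a detection


def cue_boxes(frame, target_class):
    """Return bounding boxes for the target class in one camera frame.

    Each box is (x1, y1, x2, y2, confidence); such boxes could be rendered
    as highlight cues in the HMD. With an imperfect detector, some returned
    boxes will be wrong, which is the source of the automation bias the
    study examines.
    """
    results = model(frame)                 # run inference on the frame
    detections = results.pandas().xyxy[0]  # one row per detection
    hits = detections[detections['name'] == target_class]
    return list(zip(hits['xmin'], hits['ymin'],
                    hits['xmax'], hits['ymax'], hits['confidence']))
```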
