Abstract

The goal of this research was to determine how individuals perform and allocate their visual attention when monitoring multiple automated displays that differ in automation reliability. Ninety-six participants completed a simulated supervisory control task in which each automated display had a different level of reliability (70%, 85% and 95%). In addition, participants completed a high- and a low-workload condition. The performance data revealed that (1) participants failed to detect automation misses approximately 2.5 times more often than automation false alarms, (2) participants had worse automation failure detection in the high-workload condition and (3) automation failure detection remained mostly static across reliability levels. The eye-tracking data revealed that participants spread their attention relatively equally across all three automated displays for the duration of the experiment. Together, these data support a system-wide trust approach as the default position of an individual monitoring multiple automated displays.

Practitioner Summary: Given the rapid growth of automation throughout the workforce, there is an immediate need to better understand how humans monitor multiple automated displays concurrently. The data in this experiment support a system-wide trust approach as the default position of an individual monitoring multiple automated displays.

Abbreviations: DoD: Department of Defense; UA: unmanned aircraft; SCOUT: Supervisory Control Operations User Testbed; UAV: unmanned aerial vehicle; AOI: areas of interest

