Abstract
Humans increasingly use automated decision aids. However, environmental uncertainty means that automated advice can be incorrect, creating the potential for humans to act on incorrect advice or to disregard correct advice. We present a quantitative model of the cognitive process by which humans use automation when deciding whether aircraft would violate requirements for minimum separation. The model closely fitted the performance of 24 participants, who each made 2,400 conflict-detection decisions (conflict vs. nonconflict), either manually (with no assistance) or with the assistance of 90% reliable automation. When the decision aid was correct, conflict-detection accuracy improved, but when the decision aid was incorrect, accuracy and response time were impaired. The model indicated that participants integrated advice into their decision process by inhibiting evidence accumulation toward the task response that was incongruent with that advice, thereby ensuring that decisions could not be made solely on automated advice without first sampling information from the task environment.
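To illustrate the mechanism described above, the sketch below simulates a simple two-accumulator race in which automated advice inhibits evidence accumulation toward the advice-incongruent response, so a decision still requires sampling evidence from the task display. This is a minimal illustrative sketch, not the authors' fitted model; the function name, parameter names, and all parameter values are hypothetical.

```python
# Illustrative sketch only (not the authors' fitted model): a two-accumulator
# race in which automated advice inhibits accumulation toward the
# advice-incongruent response. All parameter values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(drift_conflict, drift_nonconflict, advice=None,
                   inhibition=0.5, threshold=1.0, noise=0.3, dt=0.01, max_t=5.0):
    """Race two noisy accumulators (conflict vs. nonconflict) to a threshold.

    advice: None (manual), 'conflict', or 'nonconflict'. Advice reduces the
    drift rate of the incongruent accumulator by the `inhibition` factor, so
    the response cannot be driven by the advice alone.
    """
    drifts = {'conflict': drift_conflict, 'nonconflict': drift_nonconflict}
    if advice is not None:
        incongruent = 'nonconflict' if advice == 'conflict' else 'conflict'
        drifts[incongruent] *= (1.0 - inhibition)

    evidence = {'conflict': 0.0, 'nonconflict': 0.0}
    t = 0.0
    while t < max_t:
        for resp in evidence:
            evidence[resp] += drifts[resp] * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        winners = [resp for resp, v in evidence.items() if v >= threshold]
        if winners:
            return winners[0], t  # chosen response and response time
    return None, max_t  # no decision before the deadline

# Example: a true conflict trial with correct vs. incorrect automated advice.
print(simulate_trial(1.2, 0.4, advice='conflict'))     # advice correct
print(simulate_trial(1.2, 0.4, advice='nonconflict'))  # advice incorrect
```

Under these assumed settings, correct advice speeds and improves conflict detection, while incorrect advice slows it and raises the chance of choosing the wrong response, mirroring the pattern reported in the abstract.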