Abstract

We report two experiments in which pilots' attention is occasionally directed to inappropriate or inaccurate locations in space, replicating the effects of imperfect automation. A general taxonomy of human performance costs in these situations is presented. In Experiment 1, pilots are engaged in an air-ground targeting scenario. Target cueing, based on semi-reliable sensor information, sometimes directs attention away from the true target. Yet pilots follow such guidance even when aware of its unreliability, a consequence of the difficulty of the unaided task. In Experiment 2, pilots in a free flight simulation are engaged in a series of traffic conflict avoidance maneuvers, using a cockpit display of traffic information (CDTI). On rare trials, the CDTI's knowledge of the traffic intruder's intentions, reflected in a predictor symbol, is unreliable and does not correspond with the actual aircraft behavior. Yet pilots' avoidance behavior is governed by the predictor symbol, and a display manipulation that calls attention to the predictor's inaccuracy does little to reduce pilots' reliance on it, although it does reduce visual workload. The data are interpreted in terms of appropriate trust calibration.
