Abstract

A model is described that accounts for the pilot's processing of visual flowfield cues during low-level flight over uncultured terrain. The model is predicated on the notion that the pilot makes noisy, sampled measurements of the spatially distributed visual flowfield surrounding him and, on the basis of these measurements, generates estimates of his own linear and angular terrain-relative velocities that optimally satisfy, in a least-squares sense, the visual kinematic flow constraints. A subsidiary but significant output of the model is a time map, an observer-centered, spatially scaled replica of the viewed surface. Simulation results are presented to demonstrate the model's potential for matching relevant human visual performance data and for evaluating candidate simulator configurations in terms of their expected impact on the perceptual performance of the terrain-following pilot. Additional model applications are discussed, including interfacing with other human performance models and modeling other types of visually driven human task performance.
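The least-squares use of flow constraints referred to above can be illustrated with a small numerical sketch. The Python snippet below is not the paper's model; it assumes the standard perspective flow-field constraint (Longuet-Higgins/Prazdny form), assumes known inverse ranges at the sampled image points, and uses hypothetical names throughout. It simply shows how noisy, sampled flow measurements over-determine the observer's linear and angular velocities, which are then recovered by linear least squares.

```python
# Illustrative sketch only (hypothetical names, not the paper's implementation):
# recover linear velocity T and angular velocity Omega from noisy, sampled
# optic-flow measurements by linear least squares, assuming the standard
# perspective flow-field constraint and known inverse ranges at the samples.
import numpy as np

def flow_design_matrix(xy, inv_range):
    """Stack flow-constraint rows for the unknowns (Tx, Ty, Tz, Wx, Wy, Wz).

    For an image point (x, y) with inverse range q = 1/Z (focal length = 1):
        u = q*(-Tx + x*Tz) + x*y*Wx - (1 + x**2)*Wy + y*Wz
        v = q*(-Ty + y*Tz) + (1 + y**2)*Wx - x*y*Wy - x*Wz
    """
    rows = []
    for (x, y), q in zip(xy, inv_range):
        rows.append([-q, 0.0, q * x, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -q, q * y, 1.0 + y * y, -x * y, -x])
    return np.asarray(rows)

def estimate_egomotion(xy, flow, inv_range):
    """Least-squares estimates of (T, Omega) from sampled flow vectors."""
    A = flow_design_matrix(xy, inv_range)
    b = np.asarray(flow).reshape(-1)          # interleaved (u0, v0, u1, v1, ...)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]

# Synthetic check: generate flow from a known self-motion, add noise, re-estimate.
rng = np.random.default_rng(0)
N = 200
xy = rng.uniform(-0.5, 0.5, size=(N, 2))      # sampled image locations
inv_range = rng.uniform(0.05, 0.2, size=N)    # 1/Z for viewed terrain points
T_true = np.array([0.0, -0.5, 5.0])           # mostly forward translation
W_true = np.array([0.01, 0.02, -0.005])       # small body rates
flow = flow_design_matrix(xy, inv_range) @ np.concatenate([T_true, W_true])
flow = flow.reshape(N, 2) + 0.002 * rng.standard_normal((N, 2))  # noisy samples
T_est, W_est = estimate_egomotion(xy, flow, inv_range)
print("T estimate:", T_est)
print("Omega estimate:", W_est)
```

Because the flow constraint is linear in the six motion unknowns once inverse range is fixed, a handful of noisy samples already yields a well-conditioned least-squares problem; the paper's model additionally treats the (inverse-range-like) surface estimates as outputs rather than inputs.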
