Abstract

A model to account for the pilot's processing of visual flowfield cues during low-level flight over uncultured terrain is described. The model is predicated on the notion that the pilot makes noisy, sampled measurements of the spatially distributed visual flowfield surrounding him and, on the basis of these measurements, generates estimates of his own linear and angular terrain-relative velocities which optimally satisfy, in a least-squares sense, the visual kinematic flow constraints. A subsidiary but significant output of the model is a time map: an observer-centered, spatially scaled replica of the viewed surface. Simulation results are presented to demonstrate the potential for modeling relevant human visual performance data and for evaluating candidate simulator configurations in terms of their expected impact on the perceptual performance of the terrain-following pilot. Additional model applications are discussed, including interfacing with other human performance models and modeling other types of visually driven human task performance.
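The least-squares formulation summarized above can be illustrated with the standard unit-focal-length perspective optic-flow constraint equations. The Python (NumPy) sketch below is only an illustration under those assumed equations, not the paper's actual model: the sample points, inverse depths, noise levels, and the assumption that inverse depth is known at each sample are all hypothetical. It stacks one pair of flow constraints per sampled image point and solves for the six linear and angular velocity components in the least-squares sense.

    import numpy as np

    rng = np.random.default_rng(0)

    def flow(x, y, rho, T, W):
        # Standard unit-focal-length perspective flow at image point (x, y)
        # with inverse depth rho, linear velocity T, angular velocity W.
        Tx, Ty, Tz = T
        Wx, Wy, Wz = W
        u = rho * (-Tx + x * Tz) + x * y * Wx - (1 + x**2) * Wy + y * Wz
        v = rho * (-Ty + y * Tz) + (1 + y**2) * Wx - x * y * Wy - x * Wz
        return u, v

    # Hypothetical sampled image points, inverse depths, and true self-motion.
    n = 40
    x = rng.uniform(-0.5, 0.5, n)
    y = rng.uniform(-0.5, 0.5, n)
    rho = rng.uniform(0.05, 0.2, n)          # inverse depths of the viewed surface
    T_true = np.array([1.0, 0.0, 0.2])       # linear velocity
    W_true = np.array([0.0, 0.05, 0.01])     # angular velocity

    # Noisy, sampled flow measurements.
    u, v = flow(x, y, rho, T_true, W_true)
    u = u + rng.normal(0.0, 1e-3, n)
    v = v + rng.normal(0.0, 1e-3, n)

    # Stack the flow constraints as a linear system A @ [T, W] = b and solve
    # for the six velocity components in the least-squares sense.
    zeros = np.zeros(n)
    A_u = np.column_stack([-rho, zeros, rho * x, x * y, -(1 + x**2), y])
    A_v = np.column_stack([zeros, -rho, rho * y, 1 + y**2, -x * y, -x])
    A = np.vstack([A_u, A_v])
    b = np.concatenate([u, v])
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("estimated linear velocity: ", est[:3])
    print("estimated angular velocity:", est[3:])

Given the velocity estimates, inverse depth (and hence range to the viewed surface) could be recovered point by point from the same constraints, which loosely corresponds to the observer-centered map described as a subsidiary output of the model.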
