Abstract

The primate vision system exhibits numerous capabilities. Some important basic visual competencies include: 1) a consistent representation of visual space across eye movements; 2) egocentric spatial perception; 3) coordinated stereo fixation upon, and pursuit of, dynamic objects; and 4) attentional gaze deployment. We present a synthetic vision system that incorporates these competencies. We hypothesize that similarities between the underlying synthetic system model and that of the primate vision system elicit correspondingly similar gaze behaviors. Psychophysical trials were conducted to record human gaze behavior during free viewing of a reproducible, dynamic, 3D scene. Identical trials were conducted with the synthetic system. A statistical comparison of synthetic and human gaze behavior showed that the two are remarkably similar.
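The abstract does not specify which statistic was used to compare synthetic and human gaze behavior. As a purely illustrative sketch, one common way to compare two sets of timing measurements (e.g. fixation durations) is the two-sample Kolmogorov-Smirnov statistic, the maximum gap between their empirical CDFs. The function name and setup below are our own assumptions, not the paper's method.

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the empirical CDFs of samples a and b.
    Values near 0 indicate similar distributions; near 1, dissimilar."""
    a, b = np.sort(np.asarray(a, float)), np.sort(np.asarray(b, float))
    pooled = np.concatenate([a, b])
    # Empirical CDF of each sample evaluated at every pooled value.
    cdf_a = np.searchsorted(a, pooled, side="right") / len(a)
    cdf_b = np.searchsorted(b, pooled, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))
```

A small statistic over, say, human and synthetic fixation-duration samples would support a claim of "remarkably similar" behavior; a formal test would compare it against the KS critical value for the sample sizes.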

Highlights

  • Vision is a data-rich sensing modality useful for environmental perception, navigation, search, hazard and novelty detection, and communication.

  • It enables continual alignment of the fovea with objects in the scene and permits correction of retinal shifts induced by head perturbations within reflexive, rather than cognitive, timespans.

  • Active foveal perception allows data reduction and high equivalent resolution when observing a scene.


Introduction

Vision is a data-rich sensing modality useful for environmental perception, navigation, search, hazard and novelty detection, and communication. Most depth mapping algorithms match pixel locations in separate camera views within a small disparity range, for example ±32 pixels. This means that depth maps obtained from static stereo configurations are often dense and well populated over portions of the scene around the fixed horopter, but they are not well suited to dynamic scenes or to tasks that involve precise depth estimation over larger scene volumes. For a binocular active head with independent left and right camera tilt axes, perspective changes due to independent tilt motions need to be accounted for (although independent tilt is not a primate-inspired ability, the algorithm we present can project images from independent tilt axes into a common, static, egocentric reference frame). The system has been designed in light of observations of the primate vision system. It retains a short-term memory of attended regions, such that they are not immediately reattended, and can be biased for basic visual tasks. We aim to compare timing statistics associated with the synthetic system observing a scene with those of human subjects observing the same scene.
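The disparity-range matching described above can be sketched as brute-force block matching: for each pixel in the left image, search a bounded horizontal window in the right image and keep the shift with the lowest sum-of-absolute-differences cost. This is a minimal illustration of the general technique, not the paper's own algorithm; the function name and parameters are our assumptions.

```python
import numpy as np

def disparity_map(left, right, max_disp=32, block=5):
    """Dense disparity by block matching: for each left-image pixel,
    compare a (block x block) patch against right-image patches shifted
    by d in [0, max_disp] and keep the d with minimum SAD cost."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = np.inf, 0
            for d in range(max_disp + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

The bounded search (here [0, max_disp], analogous to the ±32-pixel range mentioned above) is what makes such maps dense near the horopter yet unreliable for scene content whose true disparity falls outside the window.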

