Abstract

The human visual system is foveated: we can see fine spatial details in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between retina and primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects how we process binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real world. We further show that the visual system integrates disparity information across the visual field in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and that can process disparity directly in the cortical domain. This model takes real images as input and recreates the observed topography of human disparity sensitivity. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.
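As an illustration of the retino-cortical mapping described above, the sketch below resamples an image onto a log-polar grid: rows index eccentricity on a logarithmic scale (dense sampling at the fovea, coarse in the periphery) and columns index polar angle. This is a minimal sketch, not the authors' implementation; the function name, grid sizes, and nearest-neighbour sampling are illustrative assumptions.

```python
import numpy as np

def log_polar_transform(image, center, n_rings=64, n_wedges=128, r_min=1.0):
    """Resample a 2D image onto a log-polar (cortical-like) grid.

    Rows of the output index eccentricity on a logarithmic scale, so central
    pixels are sampled densely and peripheral pixels coarsely; columns index
    polar angle. Nearest-neighbour sampling keeps the sketch minimal.
    """
    h, w = image.shape[:2]
    cx, cy = center
    r_max = np.hypot(max(cx, w - 1 - cx), max(cy, h - 1 - cy))
    # Logarithmically spaced radii reproduce the approximately logarithmic
    # loss of resolution with eccentricity
    radii = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rings))
    thetas = np.linspace(0.0, 2.0 * np.pi, n_wedges, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    return image[ys, xs]

# Example: map a synthetic 256x256 image around its centre (the "fixation point")
img = np.random.rand(256, 256)
cortical = log_polar_transform(img, center=(128, 128))
print(cortical.shape)  # (64, 128): rings x wedges
```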

Highlights

  • Humans employ binocular disparities, the differences between the views of the world seen by our two eyes, to determine the depth structure of the environment [1]

  • We investigate how humans perceive depth from binocular disparity at different spatial scales and across different regions of the visual field

  • We show that small changes in disparity-defined depth are detected best in central vision, whereas peripheral vision best captures the coarser structure of the environment

Introduction

Humans use binocular disparities, the differences between the views of the world seen by our two eyes, to determine the depth structure of the environment [1]. Stereoscopic depth perception relies on relative disparities, i.e. the differences in disparities between points at different depths in the world, which are independent of fixation depth [2]. The fixated object will extend into our binocular visual field by a distance proportional to the object’s size, and over this area we will experience small stereoscopic depth changes, arising from relative retinal disparities due to the surface structure and slant or tilt of the fixated object. The world beyond the fixated object in our peripheral visual field will typically contain objects at a range of different depths. Using a variety of paradigms to investigate both absolute and relative disparity processing, several authors have provided evidence for at least two [7,8,9,10,11] or more [12] spatial channels for disparity processing, which in turn may rely on distinct sets of luminance spatial channels [13,14,15,16].
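To make the distinction between absolute and relative disparity concrete, the sketch below computes both for a simplified symmetric-vergence geometry: absolute disparity is a point's vergence demand relative to the fixation point, and relative disparity is the difference in absolute disparity between two points, so the fixation-dependent term cancels. The function names, the interpupillary distance value, and the straight-ahead geometry are illustrative assumptions, not taken from the paper.

```python
import numpy as np

IPD = 0.064  # assumed interpupillary distance in metres

def vergence_angle(distance, ipd=IPD):
    """Angle (radians) subtended at the two eyes by a point straight ahead."""
    return 2.0 * np.arctan(ipd / (2.0 * distance))

def absolute_disparity(point_dist, fixation_dist, ipd=IPD):
    """Absolute disparity of a point: its vergence demand minus that of fixation."""
    return vergence_angle(point_dist, ipd) - vergence_angle(fixation_dist, ipd)

# Two points near a 1 m fixation distance
d_near = absolute_disparity(0.95, fixation_dist=1.0)  # crossed (positive) disparity
d_far  = absolute_disparity(1.05, fixation_dist=1.0)  # uncrossed (negative) disparity

# Relative disparity between the two points; the fixation term enters both
# absolute disparities identically, so it cancels and the result does not
# depend on where we fixate.
relative = d_near - d_far
print(np.degrees(relative) * 60)  # relative disparity in arcminutes
```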
