Abstract

Deciding what constitutes an object, and what background, is an essential task for the visual system. This presents a conundrum: averaging over the visual scene is required to obtain a precise signal for object segregation, but segregation is required to define the region over which averaging should take place. Depth, obtained via binocular disparity (the differences between two eyes’ views), could help with segregation by enabling identification of object and background via differences in depth. Here, we explore depth perception in disparity-defined objects. We show that a simple object segregation rule, followed by averaging over that segregated area, can account for depth estimation errors. To do this, we compared objects with smoothly varying depth edges to those with sharp depth edges, and found that perceived peak depth was reduced for the former. A computational model used a rule based on object shape to segregate and average over a central portion of the object, and was able to emulate the reduction in perceived depth. We also demonstrated that the segregated area is not predefined but is dependent on the object shape. We discuss how this segregation strategy could be employed by animals seeking to deter binocular predators. This article is part of the themed issue ‘Vision in our three-dimensional world’.
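The segregate-then-average account can be illustrated with a minimal numerical sketch. This is not the authors' actual model: the rule for choosing the central fraction of the object (`frac=0.5` below) and the 1-D disparity profiles are assumptions for illustration only. The sketch shows how averaging disparity over a central segregated region reduces reported peak depth for an object with smoothly varying edges relative to one with sharp edges, even when both have the same true peak.

```python
import numpy as np

def perceived_depth(disparity_profile, frac=0.5):
    """Segregate-then-average sketch: find the region where the object
    stands out from the (zero-disparity) background, then average
    disparity over the central `frac` of that region.
    `frac` is an assumed, illustrative parameter."""
    support = np.flatnonzero(disparity_profile > 0)
    if support.size == 0:
        return 0.0
    center = (support[0] + support[-1]) / 2
    half = frac * (support[-1] - support[0]) / 2
    lo, hi = int(np.floor(center - half)), int(np.ceil(center + half))
    return float(disparity_profile[lo:hi + 1].mean())

x = np.linspace(-1, 1, 201)
sharp = np.where(np.abs(x) < 0.5, 1.0, 0.0)        # sharp depth edges
smooth = np.clip(np.cos(np.pi * x / 2), 0, None)   # smoothly varying edges
smooth[np.abs(x) >= 1] = 0

# Both profiles peak at 1.0, but averaging over the segregated
# central region yields a lower estimate for the smooth profile.
print(perceived_depth(sharp), perceived_depth(smooth))
```

Because the averaging window includes the sloping flanks of the smooth profile, its mean falls below the true peak, mirroring the reduction in perceived peak depth reported for smooth-edged objects.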

Highlights

  • Binocular disparity, the tiny differences between right and left eye views of a scene, can be used to segregate an object from its background even without other visual information about the boundary between object and background

  • Julesz used random dot stereograms (RDSs) to show that binocular vision alone can break camouflage, as disparity reveals the three-dimensional shape of an object even when the object has identical patterning to the background

  • There were large differences between participants, but each participant showed consistent thresholds for all conditions to within 0.2 arcmin. These results suggest that the reduction in perceived peak depth for the smoother objects is not related to the presence of half occlusions (HOs)


Introduction

Binocular disparity, the tiny differences between right and left eye views of a scene, can be used to segregate an object from its background even without other visual information about the boundary between object and background. Disparity extraction is thought to rely on a process akin to local cross-correlation, in which individual disparity-sensitive neurons signal a single disparity over a spatial region, their receptive field [11,13–18]. Models of this process can explain a variety of effects, including why some transparent scenes are perceived as a single plane rather than as two (or more) planes at different depths [19,20,21,22].
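The local cross-correlation idea can be sketched computationally. The following is a minimal 1-D illustration, not the neural model from the cited work: for each position in the left eye's image, windows of the right image at a range of candidate shifts are compared by normalized cross-correlation, and the best-matching shift is taken as the local disparity. The window size and disparity range are illustrative choices.

```python
import numpy as np

def local_xcorr_disparity(left, right, window=9, max_disp=8):
    """For each position in `left`, slide a window over `right` across
    candidate disparities and pick the shift with the highest normalized
    cross-correlation. A minimal 1-D sketch; `window` and `max_disp`
    are illustrative parameters."""
    half = window // 2
    n = len(left)
    disp = np.zeros(n, dtype=int)
    for i in range(half, n - half):
        patch_l = left[i - half:i + half + 1]
        a = patch_l - patch_l.mean()
        best, best_d = -np.inf, 0
        for d in range(-max_disp, max_disp + 1):
            j = i + d
            if j - half < 0 or j + half >= n:
                continue
            patch_r = right[j - half:j + half + 1]
            b = patch_r - patch_r.mean()
            # Normalized cross-correlation of the two windows.
            score = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
            if score > best:
                best, best_d = score, d
        disp[i] = best_d
    return disp

# Random-texture "eye views" differing only by a 3-sample shift,
# in the spirit of a random dot stereogram.
left = np.random.default_rng(0).standard_normal(200)
right = np.roll(left, 3)
disp = local_xcorr_disparity(left, right)
print(disp[95:105])  # interior estimates recover the imposed shift
```

As with an RDS, the two signals are individually featureless noise, yet the correlation-based comparison recovers the hidden shift, which is the sense in which disparity alone can reveal structure.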
