Abstract

Objects such as trees, shrubs, and tall grass consist of thousands of small surfaces that are distributed over a three-dimensional (3D) volume. To perceive the depth of surfaces within 3D clutter, a visual system can use binocular stereo and motion parallax. However, such parallax cues are less reliable in 3D clutter because surfaces tend to be partly occluded. Occlusions themselves provide depth information, but it is unknown whether visual systems use occlusion cues to aid depth perception in 3D clutter, as previous studies have addressed occlusions only for simple scene geometries. Here, we present a set of depth discrimination experiments that examine depth from occlusion cues in 3D clutter, and how these cues interact with stereo and motion parallax. We identify two probabilistic occlusion cues. The first is based on the fraction of an object that is visible. The second is based on the depth range of the occluders. We show that human observers use both of these occlusion cues. We also define ideal observers that are based on these occlusion cues. Human performance is close to ideal when using the visibility cue but far from ideal when using the range cue. A key reason for the latter is that the range cue depends on depth estimates of the clutter itself, which are unreliable. Our results provide new fundamental constraints on the depth information that is available from occlusions in 3D clutter, and on how the occlusion cues are combined with binocular stereo and motion parallax cues.
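
To illustrate why the visibility cue carries depth information, the following is a minimal Monte Carlo sketch, not drawn from the paper: it assumes a toy scene in which disc-shaped occluders are scattered uniformly over a frontoparallel footprint and uniformly in depth, and it estimates the expected visible fraction of a target placed at different depths. The function and parameter names (visible_fraction, occluder_radius, and so on) are hypothetical illustration choices, not the authors' model.

```python
# Toy sketch (not the paper's model): in uniform random clutter, the expected
# visible fraction of a target decreases with its depth, so visible fraction
# is probabilistic information about depth (the "visibility cue").
import numpy as np

rng = np.random.default_rng(0)

def visible_fraction(target_depth, n_occluders=200, occluder_radius=0.03,
                     clutter_depth=1.0, n_samples=2000):
    """Estimate the visible (unoccluded) fraction of a unit-square target.

    Toy assumptions: disc occluders are scattered uniformly over a unit-square
    frontoparallel footprint and uniformly in depth over [0, clutter_depth].
    Only occluders lying in front of the target (z < target_depth) can hide it.
    """
    xy = rng.uniform(0.0, 1.0, size=(n_occluders, 2))       # occluder centres
    z = rng.uniform(0.0, clutter_depth, size=n_occluders)   # occluder depths
    occ = xy[z < target_depth]                               # occluders in front
    if occ.shape[0] == 0:
        return 1.0

    # Monte Carlo: sample points on the target and check whether each point
    # falls inside any occluder disc that lies in front of the target.
    pts = rng.uniform(0.0, 1.0, size=(n_samples, 2))
    d2 = ((pts[:, None, :] - occ[None, :, :]) ** 2).sum(axis=2)
    covered = (d2 < occluder_radius ** 2).any(axis=1)
    return 1.0 - covered.mean()

# Deeper targets are hidden by more of the clutter, so their expected
# visible fraction is smaller.
for depth in (0.1, 0.5, 0.9):
    v = np.mean([visible_fraction(depth) for _ in range(50)])
    print(f"target depth {depth:.1f}: mean visible fraction ~ {v:.2f}")
```

Under this toy model, a deeper target has more occluders in front of it and therefore a lower expected visible fraction; it is this monotonic relation between depth and expected visibility that makes the visible fraction usable as a probabilistic depth cue.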
