Abstract

Motion parallax and binocular disparity contribute to the perceived depth of three-dimensional (3D) objects. However, depth is often misperceived, even when both cues are available. This may be due in part to conflicts with unmodelled cues endemic to computerized displays. Here we evaluated the impact of display-based cue conflicts on depth cue integration by comparing perceived depth for physical and virtual objects. Truncated square pyramids were rendered using Blender and 3D printed. We assessed perceived depth using a discrimination task with motion parallax, binocular disparity, and their combination. Physical stimuli were presented with precise control over position and lighting. Virtual stimuli were viewed using a head-mounted display. To generate motion parallax, observers made lateral head movements using a chin rest on a motion platform. Observers indicated whether the width of the front face appeared greater or less than the distance between this surface and the base. We found that accuracy was similar for virtual and physical pyramids. All estimates were more precise when depth was defined by binocular disparity than by motion parallax. Our probabilistic modelling shows that a linear combination of cues does not adequately describe performance in either the physical or the virtual condition. While there was inter-observer variability in weights, performance in all conditions was best predicted by a veto model that excludes the less reliable depth cue, in this case motion parallax.
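
For context, a minimal sketch of the two model classes contrasted above, written in the standard reliability-weighted form commonly used for cue integration; the symbols (depth estimates $\hat{D}_{\mathrm{disp}}$, $\hat{D}_{\mathrm{mp}}$ and their standard deviations $\sigma_{\mathrm{disp}}$, $\sigma_{\mathrm{mp}}$) are illustrative and not necessarily the parameterization used in the authors' probabilistic model.

Linear (reliability-weighted) combination:
\[
\hat{D}_{\mathrm{comb}} = w_{\mathrm{disp}}\,\hat{D}_{\mathrm{disp}} + w_{\mathrm{mp}}\,\hat{D}_{\mathrm{mp}},
\qquad
w_{\mathrm{disp}} = \frac{1/\sigma_{\mathrm{disp}}^{2}}{1/\sigma_{\mathrm{disp}}^{2} + 1/\sigma_{\mathrm{mp}}^{2}},
\qquad
w_{\mathrm{mp}} = 1 - w_{\mathrm{disp}},
\]
with predicted combined variance
\[
\sigma_{\mathrm{comb}}^{2} = \frac{\sigma_{\mathrm{disp}}^{2}\,\sigma_{\mathrm{mp}}^{2}}{\sigma_{\mathrm{disp}}^{2} + \sigma_{\mathrm{mp}}^{2}},
\]
which is never larger than the variance of the better single cue. Veto model (the less reliable cue, here motion parallax, is discarded):
\[
\hat{D}_{\mathrm{comb}} = \hat{D}_{\mathrm{disp}},
\qquad
\sigma_{\mathrm{comb}}^{2} = \sigma_{\mathrm{disp}}^{2}.
\]
The abstract's finding is that observed precision matched the veto prediction rather than the lower variance predicted by linear combination.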
