Abstract

Stereoscopic, or “3D”, vision in humans is mediated by neurons sensitive to disparities in the positions of objects in the two eyes’ views. A disparity-sensitive neuron is typically characterized by its responses to the left- and right-eye monocular signals, S_L and S_R, respectively. However, it can alternatively be characterized by its sensitivity to the sum of the two eyes’ inputs, S+ = S_L + S_R, and their difference, S− = S_L − S_R. Li and Atick’s theory of efficient binocular encoding proposes that the S+ and S− signals can be separately weighted to maximize the efficiency with which binocular information is encoded. This adaptation changes each neuron’s sensitivity and preferred binocular disparity, with predicted effects on the perceived stereoscopic depth of objects. To test these predictions, we measured the apparent depth of a random-dot stereogram with an ‘in-front’ target following adaptation to binocularly correlated or anti-correlated horizontally oriented grating stimuli, which reduce sensitivity to the S+ and S− signals, respectively, but which contain no conventional stereo-depth signals. Adaptation to the anti-correlated stimuli made the target appear closer to the background than did adaptation to the correlated stimuli, with differences in perceived depth of up to 60%. We show how this finding can be accommodated by a standard model of binocular disparity processing, modified to incorporate the binocular adaptation suggested by Li and Atick’s theory.
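The sum/difference decomposition described above can be illustrated with a minimal sketch. This is not the authors’ model: the function names and the gain parameters w_plus and w_minus are hypothetical stand-ins for the adaptable channel weights proposed by Li and Atick’s theory, shown here only to make the channel arithmetic concrete.

```python
import numpy as np

def binocular_channels(s_left, s_right, w_plus=1.0, w_minus=1.0):
    """Return the weighted binocular sum (S+) and difference (S-)
    signals for left- and right-eye monocular inputs."""
    s_plus = w_plus * (s_left + s_right)    # S+ channel: binocularly correlated content
    s_minus = w_minus * (s_left - s_right)  # S- channel: binocularly anti-correlated content
    return s_plus, s_minus

def recombined_inputs(s_plus, s_minus):
    """Recombine the channels into effective monocular drives.
    With equal channel gains this recovers the original inputs;
    unequal gains change the effective interocular balance, and
    hence a binocular neuron's disparity tuning."""
    s_left_eff = 0.5 * (s_plus + s_minus)
    s_right_eff = 0.5 * (s_plus - s_minus)
    return s_left_eff, s_right_eff

# Example: adaptation to correlated stimuli is modelled here as a
# reduced S+ gain (w_plus < 1), leaving the S- channel untouched.
left = np.array([1.0, 0.4, -0.2])
right = np.array([0.8, 0.5, -0.1])
s_plus, s_minus = binocular_channels(left, right, w_plus=0.5, w_minus=1.0)
print(recombined_inputs(s_plus, s_minus))
```

With w_plus = w_minus = 1, recombination returns S_L and S_R exactly; lowering one gain, as in the example, mixes the two eyes’ signals unequally, which is one way to picture how channel-specific adaptation could shift a neuron’s preferred disparity.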
