Luminance contrast is one of the key factors in the visibility of objects in the world around us. Previous work has shown that perceived depth from binocular disparity depends profoundly on the luminance contrast of the image. This dependence cannot be explained by existing disparity models, such as the well-established disparity energy model, because they predict no effect of luminance contrast on depth perception. Here, we develop a model for disparity processing that incorporates contrast normalization of the neural response into the disparity energy model to account for the contrast dependence of perceived depth from disparity. Our model contains an array of disparity channels, each with a different disparity selectivity. The binocular images are first processed by the left- and right-eye receptive fields of each channel. The outputs of the two receptive fields are combined linearly to form the channel's excitatory disparity signal, which is then fed into a nonlinear contrast gain control mechanism. Perceived depth is determined by a weighted average across all the disparity channels that respond to the binocular images. This model provides the first analytic account of how luminance contrast affects perceived depth from disparity.
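To make the processing stages concrete, below is a minimal numerical sketch of one way the pipeline described above could be organized: 1-D Gabor receptive fields for the two eyes, a positional-offset disparity code, linear binocular combination, divisive normalization by pooled stimulus contrast, and a response-weighted-average read-out. All function names, parameter values, and the particular choice of normalization pool are illustrative assumptions rather than the paper's implementation, and the sketch does not attempt to reproduce the model's quantitative predictions.

```python
import numpy as np

def gabor(x, sigma=2.0, freq=0.25, phase=0.0):
    """1-D Gabor receptive-field profile (illustrative parameter values)."""
    return np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def channel_response(img_left, img_right, x, preferred_disparity, sigma_n=0.1):
    """One disparity channel: linear binocular combination followed by a
    divisive contrast gain control (all constants are assumptions)."""
    # Left- and right-eye receptive fields; the channel's disparity
    # preference is coded here as a positional offset between the two fields.
    rf_left = gabor(x)
    rf_right = gabor(x - preferred_disparity)

    # Linear filtering of each eye's image and linear binocular summation:
    # the channel's excitatory disparity signal.
    drive = rf_left @ img_left + rf_right @ img_right

    # Static nonlinearity plus divisive normalization by pooled stimulus contrast.
    contrast_energy = np.mean(img_left**2) + np.mean(img_right**2)
    return drive**2 / (sigma_n**2 + contrast_energy)

def decode_depth(img_left, img_right, x, preferred_disparities):
    """Read-out: response-weighted average of the channels' preferred disparities."""
    responses = np.array([channel_response(img_left, img_right, x, d)
                          for d in preferred_disparities])
    return np.sum(responses * preferred_disparities) / np.sum(responses)

# Toy demo: a low-contrast Gabor patch presented with a disparity of 1.0 units.
x = np.linspace(-10.0, 10.0, 201)
left = 0.2 * gabor(x, sigma=4.0, freq=0.15)
right = 0.2 * gabor(x - 1.0, sigma=4.0, freq=0.15)
preferred_disparities = np.linspace(-3.0, 3.0, 25)

print("decoded disparity:", decode_depth(left, right, x, preferred_disparities))
```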