Abstract

How are binocular disparities encoded and represented in the human visual system? An 'encoding cube' diagram is introduced to visualise differences between competing models. To distinguish the models experimentally, the depth-increment-detection function (discriminating disparity d from d ± Δd) was measured as a function of standing disparity (d) with spatially filtered random-dot stereograms of different centre spatial frequencies. Stereothresholds degraded more quickly as standing disparity was increased with stimuli defined by high rather than low centre spatial frequency. This is consistent with a close correlation between the spatial scale of detection mechanisms and the disparities they process. It is shown that a simple model, in which discrimination is limited by the noisy ratio of the outputs of three disparity-selective mechanisms at each spatial scale, can account for the data. It is not necessary to invoke a population code for disparity to model the depth-increment-detection function. This type of encoding scheme implies insensitivity to large interocular phase differences. Might the system have developed a strategy to disambiguate or shift the matches made at fine scales using those made at coarse scales at large standing disparities? In agreement with Rohaly and Wilson, no evidence was found that this is so. Such a scheme would predict that stereothresholds determined with targets composed of compounds of high and low frequency should be superior to those of either component alone. Although a small stereoacuity benefit was found at small disparities, the more striking result was that stereothresholds for compound-frequency targets were actually degraded at large standing disparities. The results argue against the neural shifting of the matching range of fine scales by coarse-scale matches posited by certain stereo models.
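The ratio-of-mechanisms scheme described above can be sketched numerically. The sketch below is purely illustrative: the Gaussian tuning curves, the choice of mechanism centres (near/zero/far, spaced in proportion to the spatial period of the scale), the tuning widths, and the fixed internal noise level are all assumptions introduced for this example, not the parameters of the paper's model.

```python
import numpy as np

def mechanism_responses(disparity, scale_period):
    """Responses of three disparity-selective mechanisms (near, zero, far).

    Tuning is Gaussian, and both the mechanism centres and tuning width
    scale with the spatial period of the channel -- a hypothetical
    implementation of the size-disparity correlation.
    """
    sigma = 0.25 * scale_period                    # assumed tuning width
    centers = np.array([-0.5, 0.0, 0.5]) * scale_period
    return np.exp(-(disparity - centers) ** 2 / (2 * sigma ** 2))

def discriminability(d, delta, scale_period, noise=0.05):
    """Signal-to-noise ratio for discriminating disparity d from d + delta.

    The decision variable is the near/far response difference normalised
    by total activity; `noise` is an assumed fixed internal noise level.
    """
    def ratio(r):
        return (r[2] - r[0]) / r.sum()
    r_base = mechanism_responses(d, scale_period)
    r_test = mechanism_responses(d + delta, scale_period)
    return abs(ratio(r_test) - ratio(r_base)) / noise

# A fine scale (small period) discriminates well near zero standing
# disparity but degrades at large standing disparity, where a coarse
# scale (large period) retains sensitivity.
fine_near = discriminability(0.0, 0.05, scale_period=1.0)
fine_far = discriminability(0.8, 0.05, scale_period=1.0)
coarse_far = discriminability(0.8, 0.05, scale_period=4.0)
```

Under these assumptions, `fine_near` exceeds `fine_far`, and `coarse_far` exceeds `fine_far`: the scale whose period is matched to the standing disparity carries the discrimination, reproducing the qualitative pattern of the stereothreshold data.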


