The brain combines information from the two eyes during vision. This combination is obligatory to a remarkable extent: In random-dot kinematograms (RDKs), randomly moving noise dots were similarly effective at preventing observers from seeing the motion of coherently moving signal dots, whether the signal and noise were presented to the same eye or segregated to different eyes. However, motion detectors vary in their binocularity: Neurons in visual brain area V1 that encode high-contrast, high-speed stimuli may be less completely binocular than neurons that encode low-contrast, low-speed stimuli, and neurons in area MT often receive unbalanced inputs from the two eyes. We therefore predicted that, for high-contrast, high-speed stimuli only, there would be a benefit to segregating the signal and noise of the RDK into different eyes. We found this benefit, both when performance was measured by percent-coherence thresholds and when it was measured by luminance-contrast-ratio (signal-dot contrast to noise-dot contrast) thresholds. Thus, for high-contrast, high-speed stimuli, binocular fusion of local motion is not complete before the extraction of global motion. We also replicated a crossover interaction: At high speed, global motion extraction was generally more efficient when dot contrast was high, but at low speed it was more efficient when dot contrast was low. We provide a schematic model of binocular global motion perception to show how the contrast-speed interaction can be predicted from neurophysiology and why it should be exaggerated under segregated viewing. Our data bore out these predictions. We conclude that different neural populations limit performance during binocular global motion perception, depending on stimulus contrast and speed.