Abstract

In previous work (May & Zhaoping, 2016; May, Zhaoping, & Hibbard, 2012), we provided evidence that the visual system efficiently encodes binocular information using separately adaptable binocular summation and differencing channels. In that work, binocular test stimuli delivered different grating patterns to the two binocular channels; selective adaptation of one of the binocular channels made participants more likely to see the other channel's grating pattern. In the current study, we extend this paradigm to face perception. Our test stimuli delivered different face images to the two binocular channels, and we found that selective adaptation of one binocular channel biased the observer to perceive the other channel's face image. We show that the perceived identity, gender, emotional expression, or direction of 3-D rotation of a facial test image can be influenced by pre-exposure to binocular random-noise patterns that contain no meaningful spatial structure. Our results provide compelling evidence that face-processing mechanisms can inherit adaptation from low-level sites. Our adaptation paradigm targets the low-level mechanisms in such a way that any response bias or inadvertent adaptation of high-level mechanisms selective for face categories would reduce, rather than produce, the measured effects of adaptation.

Highlights

  • Li and Atick (1994) proposed that the two eyes’ signals are coded efficiently in the brain using binocular summation and differencing channels very early in the processing stream

  • We presented binocular test stimuli that delivered different face images to the binocular summation and differencing channels, and found that selective adaptation of one binocular channel biased perception toward the other channel’s face image

  • As argued in the Introduction, the site of adaptation cannot possibly be the high-level cortical mechanisms that process faces. These adaptation effects must result from inheritance of adaptation from earlier stages of processing

Introduction

Li and Atick (1994) proposed that the two eyes’ signals are coded efficiently in the brain using binocular summation and differencing channels very early in the processing stream (see Figure 1). We have previously reported psychophysical evidence for these selectively adaptable summation and differencing channels (May & Zhaoping, 2016; May, Zhaoping, & Hibbard, 2012). In those studies, one eye viewed the image A + B and the other eye viewed A − B. The binocular summation channel receives the sum of the two eyes’ inputs, (A + B) + (A − B), so the B components cancel out, leaving A; the binocular differencing channel receives their difference, (A + B) − (A − B), so the A components cancel out, leaving B. We found that by selectively adapting one or the other of the binocular channels using binocular random-noise stimuli, we could bias perception toward A or B. In the study reported here, we extend this paradigm to human faces, showing that perception of identity, gender, emotional expression, or direction of 3-D head rotation can be influenced by pre-exposure to binocular random-noise patterns.
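
To make the cancellation arithmetic concrete, the sketch below (ours, not the authors' code, with random arrays standing in for the face or grating images) builds a dichoptic stimulus in which one eye views A + B and the other views A − B, then checks that a summation channel recovers A and a differencing channel recovers B; the factor of 0.5 merely undoes the doubling and is included only for illustration.

import numpy as np

# Illustrative sketch only: stand-in images A and B replace the
# face/grating patterns used in the actual experiments.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))   # image intended for the summation channel
B = rng.standard_normal((64, 64))   # image intended for the differencing channel

left_eye = A + B      # one eye views the sum of the two images
right_eye = A - B     # the other eye views their difference

# Hypothesised binocular channels (Li & Atick, 1994):
summation = 0.5 * (left_eye + right_eye)    # (A + B) + (A - B) = 2A, so B cancels
difference = 0.5 * (left_eye - right_eye)   # (A + B) - (A - B) = 2B, so A cancels

assert np.allclose(summation, A)    # summation channel carries only A
assert np.allclose(difference, B)   # differencing channel carries only B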
