Abstract

Robust perception of heading involves integration of visual and non-visual (e.g., vestibular) cues. Area MSTd is thought to be involved in heading perception, as neurons in this area are sensitive to global patterns of optic flow as well as to translation in darkness. To examine how visual and vestibular signals in MSTd contribute to heading perception, we recorded single-unit responses during a fine heading discrimination task. Heading direction was varied in small steps around straight ahead in the horizontal plane. The task was performed in a virtual reality system, and heading was defined in three ways: 1) inertial motion only (Vestibular condition); 2) optic flow only (Visual condition); and 3) a congruent combination of inertial motion and optic flow (Combined condition). Stimuli were smooth motion trajectories with a Gaussian velocity profile. Psychophysical thresholds averaged ∼2° in the Vestibular condition. Thresholds in the Visual condition were well below 1° for fully coherent motion, but were adjusted to match vestibular thresholds by reducing motion coherence. The most sensitive MSTd neurons had thresholds close to behavioral thresholds, but the average neuron was much less sensitive than the monkey in both single-cue conditions. In the Combined condition, psychophysical thresholds were significantly improved relative to the single-cue conditions. Thresholds for ‘congruent’ MSTd neurons with matched visual and vestibular tuning preferences were also significantly improved under cue combination, whereas thresholds for neurons with opposite tuning preferences were not. We conclude that selective pooling of the responses of ‘congruent’ MSTd neurons may contribute to cue integration for heading perception.
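For context, the improvement under cue combination is conventionally benchmarked against the maximum-likelihood (optimal) integration rule, under which the predicted combined threshold is sigma_comb = sqrt(sigma_ves^2 * sigma_vis^2 / (sigma_ves^2 + sigma_vis^2)). The sketch below is illustrative only and is not taken from the paper; the function name and the example threshold values (single-cue thresholds matched near the ∼2° vestibular level) are assumptions for demonstration.

```python
import math

def predicted_combined_threshold(sigma_ves: float, sigma_vis: float) -> float:
    """Maximum-likelihood (optimal) prediction for the combined-cue
    discrimination threshold, assuming independent Gaussian noise on
    each single cue."""
    return math.sqrt((sigma_ves ** 2 * sigma_vis ** 2) /
                     (sigma_ves ** 2 + sigma_vis ** 2))

# Illustrative values only: single-cue thresholds matched near the ~2 deg
# vestibular level, as in the coherence-matching procedure described above.
sigma_ves = 2.0  # deg, Vestibular condition
sigma_vis = 2.0  # deg, Visual condition (motion coherence reduced to match)
print(f"{predicted_combined_threshold(sigma_ves, sigma_vis):.2f} deg")  # ~1.41
```

With matched single-cue thresholds, optimal integration predicts a √2-fold improvement (about 1.41° in this example), which is the standard benchmark against which observed Combined-condition thresholds are compared.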
