Abstract

How and where in the brain audio-visual signals are bound to create multimodal objects remains unknown. One hypothesis is that temporal coherence between dynamic multisensory signals provides a mechanism for binding stimulus features across sensory modalities. Here, we report that when the luminance of a visual stimulus is temporally coherent with the amplitude fluctuations of one sound in a mixture, the representation of that sound is enhanced in auditory cortex. Critically, this enhancement extends to include both binding and non-binding features of the sound. We demonstrate that visual information conveyed from visual cortex via the phase of the local field potential is combined with auditory information within auditory cortex. These data provide evidence that early cross-sensory binding provides a bottom-up mechanism for the formation of cross-sensory objects and that one role for multisensory binding in auditory cortex is to support auditory scene analysis.

Highlights

  • When listening to a sound of interest, we frequently look at the source

  • One possibility is that the integration of cross-sensory information in early sensory cortex provides a bottom-up substrate for the binding of multisensory stimulus features into a single perceptual object (Bizley et al., 2016b)

  • In a subset of these animals, we were able to reversibly silence visual cortex during recording in order to determine the origin of visual-stimulus-elicited neural changes


Introduction

When listening to a sound of interest, we frequently look at the source, yet how auditory and visual information are integrated to form a coherent perceptual object is unknown. The temporal properties of a visual stimulus can be exploited to detect correspondence between auditory and visual streams (Crosse et al., 2015; Denison et al., 2013; Rahne et al., 2008), can bias the perceptual organization of a sound scene (Brosch et al., 2015), and can enhance or impair listening performance depending on whether the visual stimulus is temporally coherent with a target or distractor sound stream (Maddox et al., 2015). Together, these behavioral results suggest that temporal coherence between auditory and visual stimuli can promote binding of cross-modal features to enable the formation of an auditory-visual (AV) object (Bizley et al., 2016b). In order to demonstrate binding, an appropriate cross-modal stimulus should elicit enhanced neural encoding of the stimulus features that bind the auditory and visual streams (the "binding features"), but, critically, there should also be enhancement in the representation of other stimulus features (the "non-binding features") associated with the source (Figure 1C).
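The stimulus manipulation described above can be made concrete with a short sketch: a sound mixture contains two amplitude-modulated streams, and a visual luminance trace either follows the envelope of the target stream (temporally coherent) or the envelope of the distractor (independent). This is an illustration only, not the authors' stimulus-generation code; the sample rate, duration, and 7 Hz envelope cutoff are hypothetical parameters chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 1000          # sample rate (Hz); illustrative value
dur = 5.0          # stimulus duration (s); illustrative value
n = int(fs * dur)

def random_envelope(rng, n, cutoff_hz=7.0, fs=1000):
    """Low-pass-filtered noise as a slowly varying amplitude envelope,
    normalized to [0, 1]. The <7 Hz cutoff is an assumption standing in
    for the slow luminance/amplitude modulations used in such stimuli."""
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0          # keep only slow fluctuations
    env = np.fft.irfft(spec, n)
    env -= env.min()
    return env / env.max()

env_target = random_envelope(rng, n)       # envelope of the target sound
env_distractor = random_envelope(rng, n)   # envelope of an independent distractor

# Auditory mixture: two noise carriers, each modulated by its own envelope
mixture = (env_target * rng.standard_normal(n)
           + env_distractor * rng.standard_normal(n))

# Coherent condition: luminance tracks the target's amplitude envelope
luminance_coherent = env_target
# Independent condition: luminance tracks the distractor's envelope instead
luminance_independent = env_distractor

def coherence_score(envelope, luminance):
    """Pearson correlation as a simple index of temporal coherence."""
    return np.corrcoef(envelope, luminance)[0, 1]

print(coherence_score(env_target, luminance_coherent))     # exactly 1.0
print(coherence_score(env_target, luminance_independent))  # typically near 0
```

In this framing, the coherent condition pairs the visual signal with the binding feature (the target's amplitude envelope) while leaving the acoustic mixture itself unchanged, which is what allows the two conditions to be compared neurally.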

