Abstract
Our brains are expert at processing multi-sensory inputs. We automatically “fuse” sounds and sights that come from one source, to the point that we cannot turn off cross-sensory perceptual interactions. Examples like the McGurk effect (in speech perception), the “Flash-Beep illusion” (in temporal perception), and the “Ventriloquism Effect” (in spatial perception) demonstrate how robust and obligatory cross-sensory integration can be. Indeed, these sensory “illusions” are not truly illusions, but rather examples of how perceptually integrated sensory inputs are in the real world, compared with how we study them in the laboratory. Such cross-modal effects also underscore differences in the kind of information that vision and audition “specialize” in, and which our brain trusts more. Specifically, visual inputs convey spatial information precisely but temporal information poorly. Conversely, auditory inputs convey temporal information precisely but spatial information poorly. Recent cognitive neuroscience work from our lab gives insight into how the brain handles sensory specialization and cross-sensory coding, revealing distinct cortical networks that support visuo-spatial and auditory-temporal processing. These results will be reviewed, along with their implications for multi-sensory coding in films, video games, and other forms of multimedia.