Abstract

The perception of sound involves a complex array of attributes and processes, ranging from the sensation of timbre and pitch to the localization and fusion of sound sources. Computational strategies proposed to describe these phenomena have emphasized temporal features in the representation of sound in the auditory system. This is in contrast to visual processing, where spatial features, such as edges and peaks, play a critical role in defining the image. These divergent views of auditory and visual processing have led to the conclusion that the underlying neural networks must be quite different. Recent experimental findings from the peripheral and central auditory system, however, reveal intricate spatiotemporal neural response patterns and a multitude of spatial cues that can encode the acoustic stimulus. These results suggest a unified computational framework, and hence shared neural network architectures, for central auditory and visual processing. Specifically, we demonstrate how three fundamental concepts in visual processing play an analogous role in auditory processing and perception. These are: lateral inhibition for sound spectral estimation, edge orientation and direction of motion sensitivity for timbre perception, and stereopsis for binaural processing.
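
To make the first of these analogies concrete, the sketch below shows one common way a lateral inhibitory network is modelled: each tonotopic channel is suppressed by its immediate neighbours, which sharpens spectral peaks and edges in the auditory-nerve rate profile, just as lateral inhibition enhances edges in vision. This is a minimal illustrative example in Python with NumPy; the function name, the inhibition weight, and the one-neighbour connectivity are our own assumptions, not details taken from the paper.

```python
import numpy as np

def lateral_inhibition(rate_profile, strength=0.5):
    """Sharpen a tonotopic rate profile by subtracting neighbouring activity.

    rate_profile: firing rates across the tonotopic (frequency) axis.
    strength:     weight of the inhibition from adjacent channels (assumed).
    """
    inhibited = rate_profile.astype(float).copy()
    # Each channel is suppressed by the average of its two immediate
    # neighbours; this enhances peaks and edges in the profile.
    inhibited[1:-1] -= strength * 0.5 * (rate_profile[:-2] + rate_profile[2:])
    return np.clip(inhibited, 0.0, None)  # firing rates cannot be negative

# A broad spectral bump: lateral inhibition narrows it around its peak,
# improving the spectral estimate the abstract refers to.
channels = np.arange(64)
profile = np.exp(-((channels - 32) ** 2) / 200.0)
sharpened = lateral_inhibition(profile)
print(f"peak-to-mean ratio before: {profile.max() / profile.mean():.2f}, "
      f"after: {sharpened.max() / sharpened.mean():.2f}")
```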
