Abstract

We present a neurocomputational model for auditory streaming, a prominent phenomenon of auditory scene analysis. The proposed model represents auditory scene analysis by oscillatory correlation, where a perceptual stream corresponds to a synchronized assembly of neural oscillators and different streams correspond to desynchronized oscillator assemblies. The underlying neural architecture is a two-dimensional network of relaxation oscillators with lateral excitation and global inhibition, with one dimension representing time and the other frequency. By employing dynamic connections along the frequency dimension and a random element in global inhibition, the proposed model produces a temporal coherence boundary and a fission boundary that closely match psychophysical data on auditory streaming. Several issues are discussed, including how to represent physical time and how to relate shifting synchronization to auditory attention.
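The paper's governing equations are not reproduced here. As a rough illustration of the architecture the abstract describes, the following is a minimal sketch assuming Terman-Wang-style relaxation oscillator dynamics on a small time-frequency grid, with nearest-neighbor lateral excitation and a single global inhibitor. All parameter values, the two-tone stimulus, and the helper `lateral` are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of a 2-D relaxation-oscillator network with lateral
# excitation and global inhibition (assumed Terman-Wang dynamics;
# parameters and stimulus are illustrative, not from the paper).
import numpy as np

T, F = 10, 5                          # grid: T time slots x F frequency channels
eps, gamma, beta = 0.02, 6.0, 0.1     # relaxation-oscillator parameters (assumed)
W_lat, W_inh = 0.2, 0.5               # lateral excitation / global inhibition weights
dt, steps = 0.02, 5000                # Euler integration settings

x = np.random.uniform(-2.0, 2.0, (T, F))   # fast (activity) variables
y = np.random.uniform(0.0, 8.0, (T, F))    # slow (recovery) variables
z = 0.0                                    # global inhibitor

stim = np.zeros((T, F))               # external input: an alternating two-tone sequence
stim[::2, 1] = 0.8                    # low tone on even time slots
stim[1::2, 3] = 0.8                   # high tone on odd time slots

def lateral(xg):
    """Nearest-neighbor excitatory coupling along both grid dimensions."""
    act = (xg > 0).astype(float)      # only active oscillators excite neighbors
    s = np.zeros_like(xg)
    s[1:, :] += act[:-1, :]; s[:-1, :] += act[1:, :]
    s[:, 1:] += act[:, :-1]; s[:, :-1] += act[:, 1:]
    return W_lat * s

for _ in range(steps):
    coupling = lateral(x) - W_inh * z
    dx = 3 * x - x**3 + 2 - y + stim + coupling   # fast cubic dynamics
    dy = eps * (gamma * (1 + np.tanh(x / beta)) - y)
    dz = 0.1 * (float((x > 0).any()) - z)         # inhibitor tracks any activity
    x += dt * dx; y += dt * dy; z += dt * dz

# Synchronized oscillators (one stream) end up active together, while the
# global inhibitor pushes other assemblies out of phase.
print("final active oscillators:\n", (x > 0).astype(int))
```

In this sketch, lateral excitation pulls neighboring oscillators into synchrony (forming a stream), while the shared inhibitor desynchronizes distinct assemblies, which is the oscillatory-correlation idea in the abstract; the paper's actual model additionally makes the frequency-dimension connections dynamic and adds a random element to the inhibition.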

