Abstract

Profile analysis tests the ability to discriminate sounds based on patterns in amplitude spectra. Prior work has largely interpreted profile-analysis data using the power-spectrum model of masking. Under this model, performance relies on analyzing the output of a peripheral bandpass filterbank, and thresholds reflect limits imposed by frequency selectivity and neural noise. Although this model successfully captures some basic trends in profile-analysis data, it has difficulty explaining others, such as poorer performance at high frequencies. We hypothesize that these trends can instead be explained by midbrain sensitivity to neural fluctuations. Profile-analysis stimuli contain rich temporal modulations, which elicit fluctuations in neural responses that are shaped by the auditory periphery and encoded by average discharge rates in the midbrain. We used physiologically realistic models to simulate midbrain responses to profile-analysis stimuli over a wide range of frequencies, sound levels, and component numbers and spacings. Some features of profile analysis that are difficult to explain with the power-spectrum model, such as frequency dependence and the effects of hearing loss, were readily accounted for by midbrain tuning to neural fluctuations. These results inform our understanding of the role of neural fluctuations, and of the effects of hearing loss, in the discrimination of complex sounds. [Work supported by NIH R01 DC010813.]
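
As a rough illustration of the kind of stimulus described above, the sketch below constructs a profile-analysis standard (an equal-amplitude, log-spaced tone complex) and a target with a level increment on the center component, then compares their Hilbert envelopes as a crude proxy for the temporal-envelope fluctuations the abstract attributes to peripheral and midbrain processing. All parameter values (component count, frequency range, level increment, sampling rate) are illustrative assumptions rather than values from the study, and the Hilbert envelope stands in for the physiologically realistic auditory-nerve and midbrain models actually used.

    # Minimal sketch of a profile-analysis stimulus (hypothetical parameters).
    # Standard: equal-amplitude, log-spaced tone complex.
    # Target: same complex with a level increment on the center component.
    # Beating between neighboring components produces envelope fluctuations.
    import numpy as np
    from scipy.signal import hilbert

    fs = 48000                               # sampling rate (Hz), illustrative
    dur = 0.5                                # stimulus duration (s)
    t = np.arange(int(fs * dur)) / fs

    n_comp = 5                               # number of components (assumed)
    freqs = np.geomspace(500.0, 4000.0, n_comp)  # log-spaced frequencies (Hz)
    increment_db = 6.0                       # increment on center component (assumed)

    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_comp)   # random starting phases

    def complex_tone(freqs, phases, t, incr_db=0.0, target_idx=None):
        """Equal-amplitude tone complex; optionally increment one component (dB)."""
        amps = np.ones(len(freqs))
        if target_idx is not None:
            amps[target_idx] *= 10 ** (incr_db / 20)   # dB -> linear amplitude
        return sum(a * np.sin(2 * np.pi * f * t + p)
                   for a, f, p in zip(amps, freqs, phases))

    standard = complex_tone(freqs, phases, t)
    target = complex_tone(freqs, phases, t,
                          incr_db=increment_db, target_idx=n_comp // 2)

    # Hilbert envelope as a simple stand-in for neural envelope fluctuations
    # (the study itself used physiological auditory models, not this shortcut).
    env_std = np.abs(hilbert(standard))
    env_tgt = np.abs(hilbert(target))
    print(f"Envelope SD -- standard: {env_std.std():.3f}, target: {env_tgt.std():.3f}")

The increment changes the spectral profile and, through interactions among components, the depth and pattern of the envelope fluctuations; comparing envelope statistics between standard and target is only a coarse analogue of the midbrain-rate analysis the abstract describes.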
