Abstract

The perception of consonants has been investigated in various studies and shown to critically depend on fine details in the stimuli. The present study proposes a microscopic speech perception model that combines an auditory processing front end with a correlation-based template-matching back end to predict consonant recognition and confusions. The model represents an extension of the auditory signal processing model by Dau et al. [(1997), J. Acoust. Soc. Am. 102, 2892-2905] toward predicting microscopic speech perception data. Model predictions were computed for the extensive consonant perception data set provided by Zaar and Dau [(2015), J. Acoust. Soc. Am. 138, 1253-1267], obtained with consonant-vowel (CV) syllables in white noise. The predictions were in good agreement with the perceptual data, both in terms of consonant recognition and confusions. The model was further evaluated with respect to perceptual artifacts induced by (i) different hearing-aid signal processing strategies and (ii) simulated cochlear-implant processing, based on data from DiNino et al. [(2016), J. Acoust. Soc. Am. 140, 4404-4418]. The model successfully predicted the strong consonant confusions measured in these conditions. Overall, the results suggest that the proposed model may provide a valuable framework for assessing acoustic transmission channels and hearing-instrument signal processing.
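
To illustrate the general idea of a correlation-based template-matching back end, a minimal sketch is given below. It assumes that the front end (in the paper, the Dau et al. (1997) auditory model) has already converted each stimulus into a 2D internal representation (e.g., time by frequency channel), and that one template representation per response alternative is available. The function names (`correlate`, `predict_response`) and the Pearson-correlation decision metric are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def correlate(rep_a, rep_b):
    """Pearson correlation between two internal representations.

    rep_a, rep_b: 2D arrays (time x frequency channel) of equal shape,
    e.g., outputs of an auditory-model front end. Returns a scalar in [-1, 1].
    """
    a = rep_a.ravel() - rep_a.mean()
    b = rep_b.ravel() - rep_b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return (a @ b) / denom if denom > 0 else 0.0

def predict_response(test_rep, templates):
    """Template matching: pick the response alternative whose template
    correlates most strongly with the internal representation of the
    (e.g., noisy or processed) test token.

    templates: dict mapping consonant label -> 2D template representation.
    Returns the predicted label and the per-label correlation scores.
    """
    scores = {label: correlate(test_rep, rep)
              for label, rep in templates.items()}
    return max(scores, key=scores.get), scores
```

Accumulating the predicted labels over many noisy test tokens per presented consonant would yield a predicted confusion matrix that can be compared with perceptual data, which is the kind of microscopic evaluation the abstract describes.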
