Abstract

A critical issue in assessing speech recognition involves understanding the factors that cause listeners to make errors. Models like the articulation index show that average error decreases logarithmically with increases in signal-to-noise ratio (SNR). The authors investigated (a) whether this log-linear relationship holds across consonants and for individual tokens and (b) what accounts for differences in error rates at the across- and within-consonant levels. Listeners with normal hearing heard CV syllables (16 consonants and 4 vowels) spoken by 14 talkers, presented at 6 SNRs. Stimuli were presented in random order, and listeners indicated which syllable they heard. The log-linear relationship between error and SNR holds across consonants but breaks down at the token level. These 2 sources of variability (across- and within-consonant factors) explain the majority of listeners' errors. Moreover, simply adjusting for differences in token-level error thresholds explains 62% of the variability in listeners' responses. These results demonstrate that speech tests must control for the large variability among tokens rather than average across them, as is commonly done in clinical practice. Accounting for token-level differences in error thresholds in listeners with normal hearing provides a basis for tests designed to diagnostically evaluate individual differences among listeners with hearing impairment.
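To make the log-linear error model concrete, the sketch below (an illustration, not the authors' analysis code) fits log error probability as a linear function of SNR for two hypothetical tokens and estimates the per-token SNR threshold shift the abstract refers to; all SNR values, error rates, and variable names here are invented for illustration.

```python
import numpy as np

# Hypothetical SNRs (dB) and per-token error rates, invented for illustration;
# the study itself used 6 SNRs and CV tokens spoken by 14 talkers.
snrs = np.array([-22.0, -20.0, -16.0, -10.0, -2.0, 6.0])

def fit_log_linear(error_rates):
    """Fit log10(error) = a + b * SNR and return (intercept a, slope b)."""
    log_err = np.log10(np.clip(error_rates, 1e-3, 1.0))  # guard against log(0)
    b, a = np.polyfit(snrs, log_err, 1)                  # polyfit returns slope, then intercept
    return a, b

# Two hypothetical tokens of the same consonant: similar slopes, but the
# "hard" token needs a higher SNR to reach the same error (a threshold shift).
token_easy = np.array([0.55, 0.40, 0.20, 0.06, 0.02, 0.01])
token_hard = np.array([0.90, 0.80, 0.55, 0.25, 0.08, 0.03])

a_e, b_e = fit_log_linear(token_easy)
a_h, b_h = fit_log_linear(token_hard)

# With a shared slope b, log10(err) = a + b * (SNR - shift), so the SNR offset
# between tokens is (a_easy - a_hard) / b; a positive value means the hard
# token's error curve is shifted toward higher SNRs.
b_mean = (b_e + b_h) / 2.0
shift_db = (a_e - a_h) / b_mean
print(f"slopes: {b_e:.3f}, {b_h:.3f} log10(error)/dB; threshold shift ~ {shift_db:.1f} dB")
```

In this framing, adjusting each token by its estimated threshold shift aligns the error-versus-SNR curves, which is one simple way to picture how token-level threshold differences could account for much of the response variability.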
