Abstract

The effects of spectral modification on speech recognition were investigated for sensorineural listeners: one group with a flat audiometric configuration and a second group with sharply sloping high-frequency hearing loss. Three spectral shapes were tested: uniform frequency gain, high-pass filtering, and a response shaped relative to loudness discomfort levels. Performance-intensity functions were measured at four levels (80–95 dB SPL) using the CUNY Nonsense Syllable Test (NST) and the Synthetic Sentence Identification task (SSI), both presented monaurally under earphones against a background of multitalker babble. No significant differences in NST performance were observed between the two subject groups at any spectral shape or presentation level. On the SSI, performance of subjects with a flat audiometric configuration was highest for the uniform frequency response, whereas performance of listeners with sloping hearing loss was poorest for the uniform spectral shape. The recognition data were compared with predictions of relative performance obtained from a modification of the Articulation Index (AI). The AI provided accurate estimates of relative performance across spectral shapes, but its predictions were not consistent with relative performance as a function of presentation level. [Work supported by NIH.]
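For readers unfamiliar with the Articulation Index, the sketch below illustrates the general band-importance calculation on which such predictions are based. It is not the authors' specific modification; the band-importance weights, the assumed 30-dB speech dynamic range, and all level values are illustrative placeholders in the spirit of the classic ANSI S3.5 procedure.

    # Minimal sketch of a band-importance Articulation Index (AI) calculation.
    # NOT the authors' modification; weights and levels below are hypothetical.

    # Hypothetical per-band importance weights (sum to 1.0).
    band_importance = [0.05, 0.10, 0.15, 0.20, 0.20, 0.15, 0.10, 0.05]

    def articulation_index(speech_peaks_db, floor_db, importance=band_importance):
        """Return an AI between 0 and 1.

        speech_peaks_db -- per-band speech peak levels (dB SPL)
        floor_db        -- per-band effective floor: the larger of the masking
                           noise level and the listener's threshold (dB SPL)
        """
        ai = 0.0
        for w, speech, floor in zip(importance, speech_peaks_db, floor_db):
            # Audible fraction of an assumed 30-dB speech dynamic range per band.
            audible = min(max(speech - floor, 0.0), 30.0) / 30.0
            ai += w * audible
        return ai

    # Example: the same amplified speech spectrum evaluated against two
    # illustrative effective floors, one flat and one sloping upward with frequency.
    speech = [62, 65, 68, 70, 68, 65, 60, 55]
    flat_floor = [50, 50, 50, 50, 50, 50, 50, 50]
    sloping_floor = [40, 42, 45, 50, 58, 66, 74, 80]

    print(articulation_index(speech, flat_floor))     # higher predicted audibility
    print(articulation_index(speech, sloping_floor))  # reduced by the high-frequency loss

Under this kind of scheme, changing the spectral shape of amplification changes the per-band audible fractions, which is why the AI can rank spectral shapes even when it fails to track absolute presentation level.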


