Abstract

Vowel identification performance can be predicted reasonably well for normal-hearing (NH) listeners based on principal components derived from excitation patterns [M. R. Molis, J. Acoust. Soc. Am. 111, 2433–2434 (2002)]. In this study, vowel categorization was measured in listeners with mild-to-moderate hearing loss. Stimuli were 54 synthesized vowels that varied orthogonally in F2 (1081–2120 Hz) and F3 (1268–2783 Hz) frequencies in equal 0.8-bark steps. Fundamental frequency contour, F1 (455 Hz), F4 (3250 Hz), F5 (3700 Hz), and duration (225 ms) were held constant. Hearing-impaired (HI) listeners categorized the stimuli as the vowels /I/, /U/, or /ɝ/. Estimates of frequency resolution were also obtained, and excitation patterns were constructed for each listener. Categorization performance was more variable for HI listeners relative to NH listeners, both within and between listeners. Many subjects appeared to rely solely on F2 frequency and had particular difficulty with the /U/ vs /ɝ/ distinction. Excitation patterns suggested a rather imprecise internal spectral representation of the stimuli. HI response patterns may reflect decreased audibility, spectral smearing of formant structure due to poor frequency resolution, or both. Models of vowel perception based on the impaired excitation patterns will be compared with formant-based models. [Work supported by NIH (DC00626).]
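The equal 0.8-bark spacing of the F2 and F3 values can be illustrated with a short sketch. This is a minimal illustration, not the authors' stimulus-generation code: it assumes the Traunmüller (1990) Hz-to-bark conversion, and the abstract does not specify which bark formula was used or how the 54 F2/F3 combinations were selected from the grid.

```python
# Minimal sketch of equal-bark formant spacing.
# Assumption: Traunmuller (1990) bark conversion; the study's actual formula
# and the exact 54-stimulus F2/F3 grid are not given in the abstract.

def hz_to_bark(f_hz: float) -> float:
    """Convert frequency in Hz to the bark scale (Traunmuller, 1990)."""
    return 26.81 * f_hz / (1960.0 + f_hz) - 0.53

def bark_to_hz(z: float) -> float:
    """Inverse of hz_to_bark."""
    return 1960.0 * (z + 0.53) / (26.28 - z)

def bark_steps(f_lo_hz: float, f_hi_hz: float, step_bark: float = 0.8):
    """Frequencies from f_lo_hz upward in equal bark steps, up to f_hi_hz."""
    z, z_hi = hz_to_bark(f_lo_hz), hz_to_bark(f_hi_hz)
    values = []
    while z <= z_hi + 1e-9:
        values.append(bark_to_hz(z))
        z += step_bark
    return values

if __name__ == "__main__":
    # Nominal F2 and F3 ranges quoted in the abstract.
    print("F2 values (Hz):", [round(f) for f in bark_steps(1081.0, 2120.0)])
    print("F3 values (Hz):", [round(f) for f in bark_steps(1268.0, 2783.0)])
```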
