Abstract
Recognition of spoken words in noise and in quiet is more accurate for Lexically Easy words (high-frequency words with few similar-sounding neighbors) than for Lexically Hard words (low-frequency words with many similar-sounding neighbors). Using monosyllables, the present set of two experiments extends this finding to a perceptually interesting class of stimuli and test formats. In both open- and closed-set formats, listeners attempted to identify amplitude-modulated and bandpass-filtered words [Shannon, R., Zeng, F., Kamath, V., Wygonski, J., Ekelid, M., 1995. Speech recognition with primarily temporal cues. Science 270, 303–304], known as noise-band speech, shown to simulate the performance of cochlear implant (CI) patients using the same number of frequency channels. The words were synthesized from a database that controls for Lexical Difficulty, Talker Identity and Talker Gender. Word recognition was significantly more accurate for Easy words in both the open- and the closed-set experiments. These results indicate that, even when spoken word recognition is challenged by noise-band speech, the Easy–Hard effect survives the perceptually uncertain conditions of word variability. Consequences for models of spoken word recognition are explored.