Abstract

We used a cochlear implant simulation (noise-vocoded speech) to investigate speech recognition and perceptual learning in normal-hearing adult speakers of English. In two separate sessions (1-2 weeks apart), 28 listeners were tested on recognition of noise-vocoded Sentences, Words, and isolated segments (Consonants and Vowels). There was evidence of significant perceptual learning that survived until Session 2 for all tasks. An individual differences analysis of Session 1 data suggested two independently-varying 'levels' of processing at work in the initial perception of the distorted speech stimuli - a 'top-down' listening mode making use of contextual and lexical information, and a 'bottom-up' mode focussed on acoustic-phonetic discriminations. By Session 2, a more generalised listening mode emerged, reflecting listeners' consolidation of basic sound-to-representation mappings. Further exploration of Consonant and Vowel confusion data (using Information Transfer analyses) suggested that better speech recognition performance may be achieved through more efficient use of the preserved cues to duration and voicing in noise-vocoded stimuli, but that listeners failed to take full advantage of such information. We conclude that training regimes involving directed attention to specific features, such as vowel length, may help to improve performance with noise-vocoded speech.
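The Information Transfer analyses mentioned above follow the standard Miller and Nicely approach: mutual information between stimulus and response, computed from a confusion matrix and normalised by stimulus entropy, so that 1.0 means perfect transmission of a feature and 0.0 means responses at chance. The paper's specific feature groupings (e.g. voicing, duration) are not reproduced here; the function below is a generic, minimal sketch of relative information transfer, and the toy matrices are illustrative only.

```python
import numpy as np

def relative_info_transfer(confusions):
    """Relative information transfer from a stimulus-by-response
    confusion-count matrix (rows = stimuli, columns = responses).

    Returns T(x;y) / H(x): 1.0 for perfect identification,
    0.0 when responses carry no information about the stimulus.
    """
    confusions = np.asarray(confusions, dtype=float)
    p = confusions / confusions.sum()   # joint probabilities p_ij
    px = p.sum(axis=1)                  # stimulus marginals
    py = p.sum(axis=0)                  # response marginals
    # Mutual information T(x;y); zero cells contribute 0 (0·log 0 := 0)
    mask = p > 0
    T = np.sum(p[mask] * np.log2(p[mask] / (px[:, None] * py[None, :])[mask]))
    # Stimulus entropy H(x)
    Hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    return T / Hx

# Toy examples (hypothetical data, not from the study):
perfect = np.array([[10, 0],
                    [0, 10]])  # two categories always identified correctly
chance = np.array([[5, 5],
                   [5, 5]])    # responses unrelated to the stimulus

it_perfect = relative_info_transfer(perfect)  # -> 1.0
it_chance = relative_info_transfer(chance)    # -> 0.0
```

To analyse transmission of a single feature such as vowel duration, the same function would be applied after collapsing the full confusion matrix into feature-level categories (e.g. long vs. short), which is how feature-specific transfer scores are conventionally derived.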
