Abstract

Objectives: (1) To evaluate the recognition of words, phonemes, and lexical tones in audiovisual (AV) and auditory-only (AO) modes in Mandarin-speaking adults with cochlear implants (CIs); (2) to examine the effect of presentation level on AV speech perception; (3) to examine the effect of hearing experience on AV speech perception.

Methods: Thirteen deaf adults (age = 29.1 ± 13.5 years; 8 male, 5 female) who had used CIs for more than 6 months and 10 normal-hearing (NH) adults participated in this study. Seven of the CI users were prelingually deaf and 6 postlingually deaf. The Mandarin Monosyllabic Word Recognition Test was used to assess recognition of words, phonemes, and lexical tones in AV and AO conditions at 3 presentation levels: speech detection threshold (SDT), speech recognition threshold (SRT), and 10 dB SL (re: SRT).

Results: The prelingual group had better phoneme recognition in the AV mode than in the AO mode at SDT and SRT (both p = 0.016), as did the NH group at SDT (p = 0.004). No mode difference was noted in the postlingual group. None of the groups showed significantly different tone recognition between the 2 modes. The prelingual and postlingual groups had significantly better phoneme and tone recognition than the NH group at SDT in the AO mode (p = 0.016 and p = 0.002 for phonemes; p = 0.001 and p < 0.001 for tones) but were outperformed by the NH group at 10 dB SL (re: SRT) in both modes (both p < 0.001 for phonemes; p < 0.001 and p = 0.002 for tones). Recognition scores were significantly correlated with group after controlling for age and sex (p < 0.001).

Conclusions: Visual input may help prelingually deaf implantees recognize phonemes but may not augment Mandarin tone recognition. The effect of presentation level on CI users' AV perception appears minimal. These findings warrant special consideration when developing audiological assessment protocols and rehabilitation strategies for implantees who speak tonal languages.

Highlights

  • Verbal information transmitted to listeners via dual-modal stimulation is often thought to be more efficient than uni-modal stimulation [1,2]

  • It was reported that deaf patients with cochlear implants (CIs) made use of visual information to supplement the auditory stimulation they received from the CIs and in this way optimized their speech perception in daily communication (e.g., [8,9,10])

  • Using Wilcoxon signed-rank tests with correction for multiple comparisons, the difference between the two modes was significant at speech detection threshold (SDT) and speech recognition threshold (SRT) in the prelingual group (both p = 0.016), and at SDT in the NH group (p = 0.004)
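The highlights mention Wilcoxon signed-rank tests with correction for multiple comparisons for the paired AV-versus-AO comparisons. The paper does not include code; the sketch below is an illustrative, self-contained implementation of an exact one-sided Wilcoxon signed-rank test for small paired samples, using hypothetical score values (not the study's data), with a Bonferroni correction applied afterwards.

```python
from itertools import product

def wilcoxon_signed_rank(av_scores, ao_scores):
    """Exact one-sided Wilcoxon signed-rank test for small paired samples.

    Tests whether av_scores tend to exceed ao_scores.
    Returns (W_plus, p_value). Zero differences are dropped, and tied
    absolute differences receive average ranks, as is conventional.
    """
    diffs = [a - b for a, b in zip(av_scores, ao_scores) if a != b]
    n = len(diffs)
    # Rank the absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    # W+ = sum of ranks belonging to positive differences.
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    # Exact p-value: under H0 every sign assignment is equally likely,
    # so enumerate all 2^n assignments (feasible for small n).
    count = sum(
        1
        for signs in product([0, 1], repeat=n)
        if sum(r for s, r in zip(signs, ranks) if s) >= w_plus
    )
    return w_plus, count / 2 ** n

# Hypothetical AV vs. AO phoneme scores for 7 listeners:
av = [80, 75, 90, 85, 70, 88, 92]
ao = [70, 72, 85, 80, 65, 84, 90]
w, p = wilcoxon_signed_rank(av, ao)
p_adj = min(1.0, p * 3)  # Bonferroni correction across 3 presentation levels
```

For 7 listeners the null distribution has only 2^7 = 128 equally likely sign patterns, so the smallest attainable one-sided p-value is 1/128 ≈ 0.008; this is why small-sample studies like this one report p-values such as 0.016 rather than arbitrarily small ones.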


Introduction

Verbal information transmitted to listeners via dual-modal (i.e., audiovisual, AV) stimulation is often thought to be more efficient than uni-modal (auditory-only, AO) stimulation [1,2]. Deaf patients with cochlear implants (CIs) have been reported to use visual information to supplement the auditory stimulation they receive from their CIs, thereby optimizing their speech perception in daily communication (e.g., [8,9,10]). Their speech recognition was significantly better in the AV condition than in the AO condition [10]. Higher AV gain was observed in CI users than in normal-hearing (NH) controls tested in simulated or noise-masked conditions, a result attributed to CI users' greater capability to integrate visual information with degraded auditory signals [11].

