Abstract

The cochlear implant (CI) allows profoundly deaf individuals to partially recover hearing. Still, due to the coarse acoustic information provided by the implant, CI users have considerable difficulties in recognizing speech, especially in noisy environments. CI users therefore rely heavily on visual cues to augment speech recognition, more so than normal-hearing individuals. However, it is unknown how attention to one (focused) or both (divided) modalities plays a role in multisensory speech recognition. Here we show that unisensory speech listening and reading were negatively impacted in divided-attention tasks for CI users—but not for normal-hearing individuals. Our psychophysical experiments revealed that, as expected, listening thresholds were consistently better for the normal-hearing, while lipreading thresholds were largely similar for the two groups. Moreover, audiovisual speech recognition for normal-hearing individuals could be described well by probabilistic summation of auditory and visual speech recognition, while CI users were better integrators than expected from statistical facilitation alone. Our results suggest that this benefit in integration comes at a cost. Unisensory speech recognition is degraded for CI users when attention needs to be divided across modalities. We conjecture that CI users exhibit an integration-attention trade-off. They focus solely on a single modality during focused-attention tasks, but need to divide their limited attentional resources in situations with uncertainty about the upcoming stimulus modality. We argue that in order to determine the benefit of a CI for speech recognition, situational factors need to be discounted by presenting speech in realistic or complex audiovisual environments.
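
The probabilistic-summation benchmark referred to above is commonly formalized by assuming that auditory and visual speech cues provide statistically independent routes to recognition (a standard modeling assumption, not a detail stated in this abstract). Under that assumption, the predicted audiovisual recognition probability is

$$p_{AV} = p_A + p_V - p_A\,p_V,$$

where $p_A$ and $p_V$ are the unisensory recognition probabilities. Audiovisual performance above $p_{AV}$ indicates integration beyond statistical facilitation, which is the pattern reported here for the CI users.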

Highlights

  • We have focused on a group-level analysis, even though it is well known that performance levels can vary widely across individuals, both for the lipreading abilities of normal-hearing listeners and for the speech-listening abilities of cochlear implant (CI) users

  • Only the auditory lapse probability changed significantly across tasks for the CI users, while visual performance remained the same (the lapse parameter is defined in the note after these highlights)

  • Normal-hearing participants can attend extensively to both auditory and visual cues, while CI users need to divide their limited attentional resources across modalities to improve multisensory speech recognition, even though this degrades their unisensory speech recognition
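
For readers unfamiliar with the term, the lapse probability mentioned above is a standard parameter of psychometric-function fits (the exact model used is given in the full text, not here). A common parameterization is

$$\psi(x) = \gamma + (1 - \gamma - \lambda)\, F(x;\alpha,\beta),$$

where $\lambda$ is the lapse rate, i.e., the probability of an incorrect response even at the most favorable stimulus levels (often attributed to attentional lapses), $\gamma$ is the guess rate, and $F$ is the underlying psychometric curve with threshold $\alpha$ and slope $\beta$.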

INTRODUCTION

The effects of audiovisual integration are clearly evident from goal-directed behavior and include behavioral benefits such as shorter reaction times (Corneil et al., 2002; Bremen et al., 2017; Colonius and Diederich, 2017), increased localization accuracy and precision (Corneil et al., 2002; Alais and Burr, 2004), and reduced ambiguity (McDonald et al., 2000). These behavioral effects are typically reflected by enhanced neuronal activity (Stein and Meredith, 1993; van de Rijt et al., 2016; Colonius and Diederich, 2017). It remains unclear from the literature whether CI users can successfully divide their attention across modalities, and whether divided attention affects their speech-recognition abilities. We therefore compared word-recognition performance during focused- and divided-attention tasks between CI users and normal-hearing individuals, by presenting unisensory and/or bisensory spoken sentences in different sensory-noise regimes.
