Abstract
Speech recognition in noise improves with combined acoustic and electric hearing compared to electric hearing alone. Here, we investigate the contribution of four low-frequency hearing cues (fundamental frequency (F0), first formant (F1), voicing, and glimpsing) to speech recognition in combined hearing. Normal-hearing listeners heard vocoded speech in one ear and low-pass (LP) filtered speech in the other. Three conditions were tested: vocoder alone, LP alone, and combined. Target speech with an average F0 of 120 Hz was mixed with a time-reversed masker sentence (average F0 = 172 Hz) at three SNRs (5, 10, and 15 dB). The LP speech aided performance at all three SNRs. F1 cues were then removed by replacing the LP speech with an LP equal-amplitude harmonic complex that followed the same F0 contour as the target speech and whose amplitude was modulated by that of the target's voiced portions. The benefits of combined hearing disappeared at 10- and 15-dB SNR but persisted at 5-dB SNR. The same happened when, additionally, F0 cues were removed by fixing the F0 of the harmonic complex at 150 Hz. The results are consistent with a role for F1 cues and for voicing and/or glimpsing cues, but not with a combination of F0 information across the two ears.
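The harmonic-complex replacement for the LP speech can be sketched in code. The following Python snippet is a minimal illustration only: the sample rate, low-pass cutoff, filter order, and the way the voiced-portion envelope is supplied are all assumptions for demonstration and are not the settings used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lp_harmonic_complex(f0_contour, voiced_envelope, fs=16000, cutoff=500):
    """Illustrative sketch: an equal-amplitude harmonic complex following a
    given F0 contour, amplitude-modulated by the envelope of the target's
    voiced portions, then low-pass filtered. Parameter values (fs, cutoff,
    filter order) are assumptions, not the study's settings."""
    # Running phase of the (possibly time-varying) fundamental
    phase = 2 * np.pi * np.cumsum(f0_contour) / fs
    # Sum equal-amplitude harmonics up to the Nyquist frequency
    max_harmonic = int((fs / 2) // max(f0_contour.max(), 1.0))
    complex_tone = np.zeros(len(f0_contour))
    for k in range(1, max_harmonic + 1):
        complex_tone += np.sin(k * phase)
    # Impose the amplitude envelope of the target's voiced portions
    complex_tone *= voiced_envelope
    # Low-pass filter so only the low-frequency region remains
    sos = butter(4, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, complex_tone)

# Example: the F0-removed condition, with F0 fixed at 150 Hz
# (the flat envelope here is a placeholder for the measured envelope)
fs = 16000
f0 = np.full(fs, 150.0)   # 1 s of constant 150-Hz F0
env = np.ones(fs)
stimulus = lp_harmonic_complex(f0, env, fs=fs)
```

For the F1-removed (but F0-preserved) condition, the same function would instead be driven by the target sentence's extracted F0 contour rather than a constant 150-Hz value.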