Abstract

Thus far, two possible roles of temporal fine structure (TFS) have been suggested for speech recognition. The first is to provide acoustic speech information. The second is to assist in identifying which auditory channels are dominated by the target signal, so that the output of these channels can be combined at a later stage to reconstruct the internal representation of that target. Our most recent work has largely contradicted the speech-information hypothesis, as we generally observe that normal-hearing (NH) listeners do not rely on the TFS of the target speech signal to obtain speech information. However, direct evidence that NH listeners rely on TFS to extract the target speech signal from the background is still lacking. The present study was designed to provide such evidence. A dual-carrier vocoder was implemented to assess the role of TFS cues in streaming. To our knowledge, this is the only strategy that allows TFS cues to be provided without transmitting speech information. Results showed that NH listeners can achieve sentence recognition scores comparable to those obtained with the original (i.e., unprocessed) TFS, suggesting a primary role of TFS cues in streaming. Implications for cochlear implants are discussed.
