The perception of speech, the recognition of words, and the understanding of spoken language involve the dynamic and interactive processing of cues provided by the incoming signal and information stored in memory. Even when accuracy is high, the relative contributions of bottom-up and top-down processes may explain variations in the speed and effort required when listening to speech. Listening is fast when the quality of the incoming signal is optimal, but it is slowed as signal quality is reduced. Likewise, listening can be speeded when expectations constrain the likely alternatives or when priming implicitly facilitates the recognition of the signal, whereas it can be slowed if the context is incongruent with the signal or if context is used to resolve ambiguities or repair misperceptions in a compensatory fashion. Within-subjects comparisons on off-line and on-line measures in different listening conditions, including simulations of auditory aging and hearing loss, are used to investigate how listening effort varies and how listening is speeded or slowed depending on signal-driven and knowledge-driven factors. Comparisons between younger and older participants are used to evaluate how long-standing reductions in auditory temporal processing and compensatory changes in brain organization may alter how signal-driven and knowledge-driven processes interact.