Abstract
Listening in noisy or complex sound environments is difficult for individuals with normal hearing and can be a debilitating impairment for those with hearing loss. Extracting meaningful information from a complex acoustic environment requires the ability to accurately encode specific sound features under highly variable listening conditions and to segregate distinct sound streams from multiple overlapping sources. The auditory system employs a variety of mechanisms to achieve this auditory scene analysis. First, neurons across levels of the auditory system exhibit compensatory adaptations of their gain and dynamic range in response to prevailing sound stimulus statistics in the environment. These adaptations allow for robust representations of sound features that are largely invariant to the level of background noise. Second, listeners can selectively attend to a desired sound target in an environment with multiple sound sources. This selective auditory attention is another form of sensory gain control, enhancing the representation of an attended sound source while suppressing responses to unattended sounds. This review will examine both “bottom-up” gain alterations in response to changes in environmental sound statistics and “top-down” mechanisms that allow for selective extraction of specific sound features in a complex auditory scene. Finally, we will discuss how hearing loss interacts with these gain control mechanisms, and the adaptive and/or maladaptive perceptual consequences of this plasticity.
Highlights
Auditory scene analysis — the ability to segregate specific sound features from multiple overlapping sources — is essential for extracting meaningful information from a complex sound environment (Bregman, 1990)
While much more work is needed to elucidate the relationship between hearing loss, cognitive decline, and auditory scene analysis, current evidence suggests that hearing impairments that arise with age are likely the combined effect of disruptions to bottom-up sound processing and top-down auditory attentional regulation
The auditory system employs a variety of adaptive coding strategies (Figures 1, 2) to navigate this cacophonous environment, including: compensatory dynamic range and gain adaptations to incoming stimulus statistics that build level- and contrast-invariant tuning of sound features under different background conditions (Rabinowitz et al., 2013); adaptive spatial tuning for localizing and focusing on specific sound sources to aid the segregation of auditory streams in complex sound environments (Reed et al., 2020); and top-down attentional mechanisms that modulate auditory response and receptive field properties to selectively amplify behaviorally relevant sound features (Fritz et al., 2005b)
Summary
Auditory scene analysis — the ability to segregate specific sound features from multiple overlapping sources — is essential for extracting meaningful information from a complex sound environment (Bregman, 1990). In addition to adapting to their own stimulus history, auditory neurons can modify their response properties to match the statistics of the entire distribution of sounds encountered in the environment. Auditory neurons adapt their dynamic range and gain in response to a variety of stimulus statistics (Figure 1), including: mean sound level (Dean et al., 2005; Wen et al., 2009; Barbour, 2011), sound level variance or contrast (Nagel and Doupe, 2006; Rabinowitz et al., 2011; Willmore et al., 2014), interaural sound cues (Dahmen et al., 2010; Stange et al., 2013), and spectral-temporal correlations (Kvale and Schreiner, 2004; Natan et al., 2016; Homma et al., 2020). We will discuss evidence for these different forms of stimulus statistic adaptation, as well as our current understanding of the neurophysiological mechanisms and perceptual consequences of these adaptations.