Abstract

In sensory systems, representational features of increasing complexity emerge at successive stages of processing. In the mammalian auditory pathway, the clearest change from brainstem to cortex is defined by what is lost, not by what is gained, in that high-fidelity temporal coding becomes increasingly restricted to slower acoustic modulation rates.1,2 Here, we explore the idea that sluggish temporal processing is more than just an inability to follow fast temporal modulations and instead reflects an emergent specialization for encoding sound features that unfold on very slow timescales.3,4 We performed simultaneous single-unit ensemble recordings from three hierarchical stages of auditory processing in awake mice: the inferior colliculus (IC), the medial geniculate body of the thalamus (MGB), and the primary auditory cortex (A1). As expected, temporal coding of brief local intervals (0.001–0.1 s) separating consecutive noise bursts was robust in the IC and declined across the MGB and A1. By contrast, slowly developing (∼1 s period) global rhythmic patterns of inter-burst interval sequences strongly modulated A1 spiking, were only weakly captured by MGB neurons, and were not captured at all by IC neurons. Shifts in stimulus regularity were represented not by changes in A1 spike rates but by changes in how spikes were arranged in time. These findings show that low-level auditory neurons with fast timescales encode isolated sound features but not the longer gestalt, whereas the extended timescales of higher-level areas can facilitate sensitivity to slower contextual changes in the sensory environment.
