Abstract

How complex natural sounds are represented by the main converging center of the auditory midbrain, the central inferior colliculus, is an open question. We applied neural discrimination to determine how the detailed encoding of individual vocalizations varies across the best frequency gradient of the central inferior colliculus. The analysis was based on collective responses from several neurons. These multi-unit spike trains were recorded from guinea pigs exposed to a spectrotemporally rich set of eleven species-specific vocalizations. Spike trains of disparate units from the same recording were combined to investigate whether groups of multi-unit clusters represent the whole set of vocalizations more reliably than a single unit, and whether temporal response correlations between them facilitate an unambiguous neural representation of the vocalizations. We found a spatial distribution of the capability to accurately encode groups of vocalizations across the best frequency gradient: different vocalizations are optimally discriminated at different locations along the gradient. Furthermore, groups of a few multi-unit clusters yield better discrimination between all tested vocalizations than a single multi-unit cluster. However, temporal response correlations between units do not improve discrimination. Our study is based on a large set of simultaneously recorded units from several guinea pigs and electrode insertion positions. Our findings suggest a broadly distributed code for behaviorally relevant vocalizations in the mammalian inferior colliculus: responses from a few non-interacting units are sufficient to faithfully represent the whole set of studied vocalizations with their diverse spectrotemporal properties.
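As a concrete illustration of what a neural-discrimination analysis of this kind can look like, the Python sketch below bins multi-unit spike trains, pools several clusters into one population response vector, and classifies single trials with a leave-one-out nearest-neighbor decoder. This is a minimal sketch under assumed choices (10 ms bins, Euclidean distance; the helper names bin_spike_train, pool_units, and nearest_neighbor_discrimination are ours for illustration) and is not the study's actual discrimination procedure.

    import numpy as np

    def bin_spike_train(spike_times, duration, bin_width=0.01):
        # Convert spike times (in seconds) into a vector of spike counts per bin.
        n_bins = int(np.ceil(duration / bin_width))
        counts, _ = np.histogram(spike_times, bins=n_bins, range=(0.0, duration))
        return counts.astype(float)

    def pool_units(unit_responses):
        # Concatenate the binned responses of several multi-unit clusters into
        # one population response vector.
        return np.concatenate(unit_responses)

    def nearest_neighbor_discrimination(trials, labels):
        # Leave-one-out nearest-neighbor classification of single-trial
        # population responses; returns the fraction of trials assigned to the
        # correct vocalization.
        trials = np.asarray(trials, dtype=float)
        labels = np.asarray(labels)
        correct = 0
        for i in range(len(trials)):
            dists = np.linalg.norm(trials - trials[i], axis=1)
            dists[i] = np.inf  # never match a trial to itself
            correct += labels[np.argmin(dists)] == labels[i]
        return correct / len(trials)

    # Toy usage: 2 vocalizations x 5 trials each, 3 pooled multi-unit clusters.
    rng = np.random.default_rng(0)
    trials, labels = [], []
    for voc in range(2):
        for _ in range(5):
            units = [bin_spike_train(rng.uniform(0.0, 1.0, 20 + 30 * voc), 1.0)
                     for _ in range(3)]
            trials.append(pool_units(units))
            labels.append(voc)
    print(nearest_neighbor_discrimination(trials, labels))

In such a scheme, shuffling trial order independently for each cluster before pooling is one simple way to remove temporal response correlations between units and test whether those correlations contribute to discrimination performance.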

Highlights

  • Vocalizations are spectrotemporally varying sounds which display a wide spectrum of acoustic properties, such as amplitude and frequency modulations, harmonics and temporal correlations

  • We combined spike train responses from several multi-units to investigate whether groups of multi-unit clusters result in better neural discrimination than a single multi-unit cluster, and whether temporal response correlations between the multi-unit clusters contribute to even better separability

  • We found that vocalizations are encoded spatially across the best frequency gradient of the mammalian inferior colliculus


Introduction

Vocalizations are spectrotemporally varying sounds which display a wide spectrum of acoustic properties, such as amplitude and frequency modulations, harmonics and temporal correlations. These natural sounds are well suited for studying the auditory system, since it has been suggested that neurons are adapted to process them (Rieke et al., 1995). We address the question of how the inferior colliculus (IC) of guinea pigs encodes species-specific vocalizations. The central nucleus of the inferior colliculus (ICC) is essential for extracting time-varying spectrotemporal information (Escabí and Schreiner, 2002) and might be important for processing complex sounds such as speech and vocalizations.
