Abstract

In the auditory streaming paradigm, alternating sequences of pure tones can be perceived as a single galloping rhythm (integration) or as two separate streams of low and high tones (segregation). Although studied for decades, the neural mechanisms underlying this perceptual grouping of sound remain a mystery. With the aim of identifying a plausible minimal neural circuit that captures this phenomenon, we propose a firing rate model with two periodically forced neural populations coupled by fast direct excitation and slow delayed inhibition. By analyzing the model in a non-smooth, slow-fast regime, we analytically prove the existence of a rich repertoire of dynamical states and of their parameter-dependent transitions. We impose plausible parameter restrictions and link all states with perceptual interpretations. Regions of stimulus parameters occupied by states linked with each percept match those found in behavioural experiments. Our model suggests that slow inhibition masks the perception of subsequent tones during segregation (forward masking), whereas fast excitation enables integration for large pitch differences between the two tones.
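To make the circuit described above concrete, the following Python sketch simulates two firing-rate units driven by alternating A and B tones, coupled by fast cross-excitation and slow, delayed cross-inhibition. All functional forms (sigmoidal gain, exponential synapses, Euler integration) and parameter values are illustrative assumptions for exposition; they are not the paper's non-smooth, slow-fast equations.

```python
import numpy as np

def gain(x, threshold=0.2, slope=20.0):
    """Sigmoidal firing-rate function (assumed form, not the paper's)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def simulate(T=2.0, dt=1e-4, tau_u=0.01, tau_s=0.3, delay=0.05,
             w_exc=0.4, w_inh=0.8, tone_dur=0.06, tone_gap=0.065,
             drive_same=1.0, drive_other=0.5):
    """Two periodically forced units with fast excitation and
    slow delayed inhibition (all parameter values illustrative)."""
    n = int(T / dt)
    d = int(delay / dt)                  # inhibitory delay in time steps
    u = np.zeros((n, 2))                 # unit activities (columns: A, B)
    s = np.zeros((n, 2))                 # slow inhibitory synaptic variables
    for k in range(n - 1):
        t = k * dt
        # Alternating tone sequence: A tones in even slots, B in odd slots.
        slot = int(t // (tone_dur + tone_gap))
        on = (t % (tone_dur + tone_gap)) < tone_dur
        inp = np.zeros(2)
        if on:
            tgt = slot % 2
            inp[tgt] = drive_same        # tone drives its preferred unit
            inp[1 - tgt] = drive_other   # and the other unit more weakly
        kd = max(k - d, 0)               # index of delayed inhibition
        for i in range(2):
            j = 1 - i
            # Fast cross-excitation, slow delayed cross-inhibition.
            total = inp[i] + w_exc * u[k, j] - w_inh * s[kd, j]
            u[k + 1, i] = u[k, i] + dt / tau_u * (-u[k, i] + gain(total))
            s[k + 1, i] = s[k, i] + dt / tau_s * (-s[k, i] + u[k, i])
    return u, s
```

Varying the assumed tone rate (tone_dur, tone_gap) and the strength of the drive to the non-preferred unit (drive_other, a proxy for pitch difference) in this sketch moves the units between responding to every tone and responding only to their own tone, mirroring the integration/segregation transitions analyzed in the paper.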

Highlights

  • Understanding how our perceptual system encodes multiple objects simultaneously is an open challenge in sensory neuroscience

  • We propose a link between dynamical states and the rhythms perceived during auditory streaming based on threshold crossings of the units’ responses: for the integrated ABAB percept, both units respond to every tone, and for the segregated A-A- or -B-B- percepts, each unit responds only to its own tone (a readout sketch follows this list)

  • Our study proposes that sequences of tones are perceived as integrated or segregated through a combination of feature-based and temporal mechanisms
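The threshold-crossing readout described in the second highlight can be sketched as follows. The threshold theta, the readout window win, and the array layout are illustrative assumptions chosen for exposition, not values taken from the paper.

```python
def classify_percept(u, dt, tone_onsets, tone_ids, theta=0.5, win=0.06):
    """Classify the perceived rhythm from threshold crossings.

    u           : (n_steps, 2) NumPy array of unit activities (columns: A, B)
    tone_onsets : onset time (s) of each tone in the sequence
    tone_ids    : 'A' or 'B' for each tone
    theta, win  : response threshold and readout window (illustrative values)
    """
    labels = []
    for onset, tone in zip(tone_onsets, tone_ids):
        k0, k1 = int(onset / dt), int((onset + win) / dt)
        # Which units cross threshold during this tone?
        hits = {i for i in (0, 1) if u[k0:k1, i].max() > theta}
        labels.append((tone, hits))
    # Integration: both units cross threshold on every tone (ABAB gallop).
    if all(hits == {0, 1} for _, hits in labels):
        return "integrated"
    # Segregation: each unit responds only to its own tone (A-A- / -B-B-).
    if all(hits == ({0} if tone == 'A' else {1}) for tone, hits in labels):
        return "segregated"
    return "other"
```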

Introduction

Understanding how our perceptual system encodes multiple objects simultaneously is an open challenge in sensory neuroscience. We can separate out a voice of interest from other voices and ambient sound (the cocktail party problem) [1, 2]. Primary auditory cortex (ACx) has a topographic map of sound frequency (tonotopy): a gradient of locations preferentially responding to frequencies from low to high [7, 8]. Feature separation alone, however, cannot account for how the auditory system segregates objects that overlap or are interleaved in time (e.g. melodies, voices). Understanding the role of temporal neural mechanisms in perceptual segregation presents an interesting modelling challenge, where the same neural populations must represent different percepts through temporal encoding.
