Abstract

Natural sounds, including vocal communication sounds, contain critical information at multiple time scales. Two essential temporal modulation rates in speech have been argued to lie in the low gamma band (∼20–80 ms duration information) and the theta band (∼150–300 ms), corresponding to segmental/diphonic versus syllabic modulation rates, respectively. It has been hypothesized that auditory cortex implements temporal integration using time constants closely related to these values, but the neural correlates of this proposed dual temporal window mechanism in human auditory cortex remain poorly understood. We recorded MEG responses from participants listening to non-speech auditory stimuli with different temporal structures, created by concatenating frequency-modulated segments of varied durations. We show that such non-speech stimuli with temporal structure matching speech-relevant scales (∼25 and ∼200 ms) elicit reliable phase tracking in the corresponding oscillatory bands (low gamma and theta), whereas stimuli with non-matching temporal structure do not. Furthermore, the topography of theta band phase tracking shows rightward lateralization, while gamma band phase tracking occurs bilaterally. The results support the hypothesis that multi-time resolution processing in cortex operates on discontinuous scales and provide evidence for an asymmetric organization of temporal analysis (asymmetrical sampling in time, AST). The data argue for a mesoscopic-level neural mechanism underlying multi-time resolution processing: the sliding and resetting of intrinsic temporal windows on privileged time scales.
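
The stimulus construction summarized above can be made concrete with a short sketch. The following Python code illustrates the general approach, concatenating frequency-modulated segments of a fixed duration; the function name, frequency range, stimulus length, and sampling rate are our assumptions for illustration, not the study's actual parameters.

    import numpy as np

    def make_fm_concatenated_stimulus(segment_ms=25.0, total_s=3.0, fs=44100,
                                      f_lo=500.0, f_hi=2000.0, seed=0):
        """Build a non-speech stimulus by concatenating frequency-modulated
        segments of equal duration. Each segment sweeps linearly between two
        randomly drawn frequencies, so segment duration (the temporal
        structure) is the systematically controlled variable. All parameter
        values here are illustrative assumptions."""
        rng = np.random.default_rng(seed)
        seg_len = int(round(segment_ms * 1e-3 * fs))  # samples per segment
        n_seg = int(round(total_s * fs / seg_len))    # segments per stimulus
        t = np.arange(seg_len) / fs
        pieces = []
        for _ in range(n_seg):
            f0, f1 = rng.uniform(f_lo, f_hi, size=2)
            # Instantaneous phase is the running integral of the
            # instantaneous frequency (linear sweep from f0 to f1).
            inst_freq = f0 + (f1 - f0) * t / t[-1]
            phase = 2 * np.pi * np.cumsum(inst_freq) / fs
            pieces.append(np.sin(phase))
        return np.concatenate(pieces)

    # ~25 ms segments give a ~40 Hz segment rate (low gamma scale);
    # ~200 ms segments give a ~5 Hz segment rate (theta scale).
    gamma_scale_stim = make_fm_concatenated_stimulus(segment_ms=25.0)
    theta_scale_stim = make_fm_concatenated_stimulus(segment_ms=200.0)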

Highlights

  • Mapping from input sounds to stored representations involves the temporal analysis and integration of information on distinct – and perhaps even nonoverlapping – timescales (Poeppel, 2003; Hickok and Poeppel, 2007; Poeppel et al., 2008)

  • We hypothesize that the two putative cortical temporal integration windows are neurally manifested in the phase pattern of the corresponding cortical rhythms

  • A phase tracking mechanism might be closely related to the two intrinsic temporal windows; phase tracking should accordingly be difficult to elicit at other oscillation frequencies (see the sketch below)
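
A common way to quantify the hypothesized phase tracking is inter-trial phase coherence (ITC): band-pass filter each trial in the band of interest, extract the instantaneous phase with the Hilbert transform, and measure phase consistency across trials. The sketch below illustrates this standard approach; it is not necessarily the exact analysis pipeline of the study, and the band edges, filter order, and sampling rate are assumptions.

    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def inter_trial_phase_coherence(trials, fs, band=(4.0, 8.0), order=4):
        """trials: array of shape (n_trials, n_samples), one MEG channel.
        Returns ITC per time point in [0, 1]; values near 1 indicate
        phases tightly aligned across trials (strong phase tracking)."""
        sos = butter(order, band, btype="band", fs=fs, output="sos")
        filtered = sosfiltfilt(sos, trials, axis=-1)   # zero-phase band-pass
        phases = np.angle(hilbert(filtered, axis=-1))  # instantaneous phase
        # ITC is the length of the mean unit phasor across trials.
        return np.abs(np.mean(np.exp(1j * phases), axis=0))

    # Placeholder usage at an assumed 600 Hz MEG sampling rate; real epochs
    # would replace the random array. The prediction is elevated theta-band
    # ITC for ~200 ms segment stimuli, elevated low-gamma ITC for ~25 ms
    # segment stimuli, and no comparable rise at non-matching frequencies.
    fs = 600.0
    trials = np.random.randn(50, int(2 * fs))
    itc_theta = inter_trial_phase_coherence(trials, fs, band=(4.0, 8.0))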

Introduction

Mapping from input sounds (such as speech) to stored representations (such as words) involves the temporal analysis and integration of information on distinct – and perhaps even nonoverlapping – timescales (Poeppel, 2003; Hickok and Poeppel, 2007; Poeppel et al., 2008). Multi-time resolution hypotheses of different types have been proposed to resolve the tension between information carried concurrently on different scales (Greenberg and Ainsworth, 2006; Giraud and Poeppel, 2012a,b). A second strand of research, somewhat more recent in origin, has focused on the temporal properties of the speech signal itself. Even a cursory glance at the acoustics of speech, whether as a waveform or as a spectrographic representation, reveals that different types of information appear to be carried on different timescales (for a review, see Rosen, 1992). The neural mechanisms for such multi-time resolution processing in human auditory cortex, and possible hemispheric asymmetries, have been a focus of much recent work.
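
To make the idea of concurrent timescales concrete, the slow, syllable-scale structure of a sound can be separated from faster, segment-scale structure by extracting the amplitude envelope and filtering it into different modulation-rate bands. This is an illustrative sketch only; the modulation band edges below are our assumptions, chosen to approximate the theta and low gamma ranges.

    import numpy as np
    from scipy.signal import butter, hilbert, sosfiltfilt

    def modulation_band(signal, fs, band):
        """Amplitude envelope of `signal`, band-pass filtered to retain
        only modulations within `band` (Hz)."""
        envelope = np.abs(hilbert(signal))  # broadband amplitude envelope
        sos = butter(2, band, btype="band", fs=fs, output="sos")
        return sosfiltfilt(sos, envelope)

    fs = 16000.0
    sound = np.random.randn(int(fs))  # placeholder for a real recording
    syllabic = modulation_band(sound, fs, (3.0, 7.0))     # theta-rate structure
    segmental = modulation_band(sound, fs, (25.0, 50.0))  # low-gamma-rate structure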

