Abstract

Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, which features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling, and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracranial EEG (iEEG) activity recorded from auditory cortex in 10 patients while they listened to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. Decoding was more accurate from phase than from power, and the spectral profile of information content was bimodal, with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content.
Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual-timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
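
As a toy illustration of the phase-versus-power comparison above (not the study's actual decoding pipeline; signals, frequencies, and parameters below are entirely synthetic assumptions), one can construct two stimulus classes that differ only in the phase of a slow component: a simple nearest-centroid decoder then succeeds from band-limited phase while power decoding stays near chance.

```python
import numpy as np

rng = np.random.default_rng(0)

def bandpass_analytic(x, fs, f_lo, f_hi):
    """Crude FFT-domain band-pass plus analytic signal (a sketch of
    Hilbert-style phase/power extraction; a real analysis would use
    proper filters, e.g. from scipy.signal)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(x)
    # keep only positive frequencies in the band -> analytic signal
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    Xa = np.zeros_like(X)
    Xa[mask] = 2 * X[mask]
    analytic = np.fft.ifft(Xa)
    return np.angle(analytic), np.abs(analytic) ** 2  # phase, power

# Two hypothetical stimulus classes that differ in the *phase* of a
# 5 Hz component but not in its power.
fs, dur = 200, 2.0
t = np.arange(0, dur, 1 / fs)

def make_trial(label):
    phase0 = 0.0 if label == 0 else np.pi
    return np.sin(2 * np.pi * 5 * t + phase0) + 0.5 * rng.standard_normal(len(t))

trials = [(make_trial(lbl), lbl) for lbl in [0, 1] * 50]

def decode_accuracy(feature):
    """Nearest-centroid decoding from phase or power at one time point."""
    feats, labels = [], []
    for x, lbl in trials:
        ph, pw = bandpass_analytic(x, fs, 3, 8)
        feats.append(ph[len(t) // 2] if feature == "phase" else pw[len(t) // 2])
        labels.append(lbl)
    feats, labels = np.array(feats), np.array(labels)
    if feature == "phase":
        # circular class centroids; distance = wrapped angular difference
        cent = np.array([np.angle(np.mean(np.exp(1j * feats[labels == c])))
                         for c in (0, 1)])
        dist = np.abs(np.angle(np.exp(1j * (feats[:, None] - cent))))
    else:
        cent = np.array([feats[labels == c].mean() for c in (0, 1)])
        dist = np.abs(feats[:, None] - cent)
    pred = np.argmin(dist, axis=1)
    return np.mean(pred == labels)

print(decode_accuracy("phase"), decode_accuracy("power"))
```

Because the two classes have identical power in the 3-8 Hz band and opposite phases, phase decoding far exceeds power decoding in this synthetic setting.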

Highlights

  • The brains of humans and other animals generate electrical activity that often exhibits rhythmic patterns, which are apparent as shoulders or small bumps in the power spectrum on top of the 1/f^α profile (Buzsaki, 2006; Buzsaki and Draguhn, 2004).

  • Most selected channels from early auditory cortex exhibited trajectories with marked and reproducible deflections at certain acoustic landmarks, most prominently sentence onset and speech tokens that follow a pause (Fig. 1B). Another distinctive feature was the presence of a power spectral peak or shoulder in the beta range, which decreased in amplitude during speech presentation.

  • We addressed whether the neural dynamics reflected in intracranial electroencephalography (iEEG) data could be reproduced by a simple model that assumes that theta- and gamma-scale neural activity is underpinned by two interconnected subnetworks, each producing pseudo-rhythmic behavior at a distinct timescale.
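
A minimal sketch of the kind of rate model described above: two excitatory-inhibitory (E-I) subnetworks with slow ("theta-scale") and fast ("gamma-scale") time constants, with the fast network feeding back negatively onto the slow one. All coupling weights, time constants, and drives below are illustrative assumptions, not the parameters fitted in the study.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(T=2.0, dt=1e-4, drive=1.5, fb=1.0):
    """Euler-integrate two coupled E-I rate subnetworks.

    Returns an (n_steps, 2) array with the excitatory rates of the
    slow and fast subnetworks over time.
    """
    n = int(round(T / dt))
    # state: E and I rates of the slow (s) and fast (f) subnetworks
    Es = Is = Ef = If = 0.1
    tau_s, tau_f = 0.05, 0.005  # slow vs fast time constants (s)
    out = np.empty((n, 2))
    for k in range(n):
        # within-subnetwork E-I coupling plus external drive;
        # the fast network inhibits the slow one (negative feedback),
        # while the slow network excites the fast one
        dEs = (-Es + sigmoid(10 * Es - 12 * Is + drive - fb * Ef)) / tau_s
        dIs = (-Is + sigmoid(12 * Es - 3 * Is)) / tau_s
        dEf = (-Ef + sigmoid(10 * Ef - 12 * If + drive + Es)) / tau_f
        dIf = (-If + sigmoid(12 * Ef - 3 * If)) / tau_f
        Es += dt * dEs
        Is += dt * dIs
        Ef += dt * dEf
        If += dt * dIf
        out[k] = (Es, Ef)
    return out
```

With parameters placed in an oscillatory regime, each subnetwork's excitatory rate fluctuates on its own timescale, and the negative feedback loop couples the two; the spectral content of the summed rates can then be compared against the iEEG power spectra.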

Introduction

The brains of humans and other animals generate electrical activity that often exhibits rhythmic patterns, which are apparent as shoulders or small bumps in the power spectrum on top of the 1/f^α profile (Buzsaki, 2006; Buzsaki and Draguhn, 2004). Alterations of spectral features observed in clinical populations have been related to microscopic anomalies in interneuronal function (Gonzalez-Burgos and Lewis, 2008; Pizzarelli and Cherubini, 2011) and/or in the local balance and coordination between synaptic excitation and inhibition (Fenton, 2015; Gao and Penzes, 2015). Macroscopic features related to rhythmic brain activity could reflect microscopic anomalies at the neuronal level and, at least in some cases, be related to specific sets of susceptibility genes (Ramamoorthi and Lin, 2011; Gao and Penzes, 2015; Benítez-Burraco and Murphy, 2016), further enhancing their interest for both basic and clinical research. Multiple pieces of evidence suggest that auditory perception, and its associated brain activity, is not a scale-free process, but instead comprises at least two distinct frequency bands, located approximately at the classically defined delta-theta (1-8 Hz) and gamma (30-60 Hz) bands, in which perceptual sensitivity and neural entrainment exceed those observed at intermediate frequencies (Poeppel, 2003; Boemio et al., 2005; Luo and Poeppel, 2012; Edwards and Chang, 2013; Ross et al., 2014; Teng et al., 2016; Teng et al., 2017).
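
The notion of a shoulder or bump riding on a 1/f^α background can be made concrete with the standard spectral parameterization (popularized by the specparam/FOOOF approach): log power is modeled as an aperiodic component plus Gaussian peaks. The parameter values below are illustrative assumptions, loosely mimicking the delta-range shoulder and beta-range bump described in this study.

```python
import numpy as np

def model_spectrum(freqs, offset, alpha, peaks):
    """log10 power = offset - alpha * log10(f) + sum of Gaussian bumps.

    peaks: list of (center_hz, height_logpower, width_hz) tuples.
    """
    logp = offset - alpha * np.log10(freqs)
    for center, height, width in peaks:
        logp += height * np.exp(-((freqs - center) ** 2) / (2 * width**2))
    return logp

freqs = np.arange(1.0, 80.0, 0.5)
logp = model_spectrum(
    freqs,
    offset=2.0,
    alpha=1.5,
    peaks=[(2.0, 0.4, 1.0),    # delta-range shoulder (illustrative)
           (20.0, 0.3, 4.0)],  # beta-range bump (illustrative)
)
```

Deviations of the measured spectrum from the fitted aperiodic component are what make the shoulders and bumps visible; a purely scale-free signal would follow the monotonically decreasing 1/f^α line alone.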
