Abstract

Cortical processing of hierarchical linguistic structures in adverse auditory situations

Maxime Niesen1*, Mathieu Bourguignon1, Marc Vander Ghinst1, Julie Bertels1, Vincent Wens1, Georges Choufani1, Sergio Hassid1, Serge Goldman1 and Xavier De Tiège1

1 Free University of Brussels, Belgium

When listening to connected speech, the brain tracks the speech temporal envelope, mainly at ~0.5 Hz and 4–8 Hz [1]. As these two rates correspond respectively to prosodic stress/sentential rhythm and to syllabic rhythm, it has been postulated that the corresponding tracking subserves parsing and chunking of the incoming speech [2]. This hypothesis is supported by data demonstrating that, when prosodic cues are removed, neural activity integrates smaller linguistic units (i.e., syllables) into larger structures (i.e., phrases and sentences) based on syntax alone [3]. In adverse listening situations, the listener’s brain selectively tracks the attended speech at syllabic and sentential rhythms [4,5]. However, it is still unknown how noise impinges on the brain processes involved in building these hierarchical linguistic structures based on syntax alone. This study relies on a method adapted from Ding et al. [3] to understand how noisy environments affect the cortical tracking of hierarchical linguistic structures.

Neuromagnetic activity was recorded with whole-scalp magnetoencephalography (MEG, Triux, Elekta) from 20 healthy right-handed adults (mean age 24 years, 11 females) in four conditions. In two conditions, subjects listened to 25 different blocks of 32–40 French monosyllabic words that were either presented in random order (Scrambled, S) or arranged in sentences sharing the same syntactic structure: pronoun + noun + verb + adjective/adverb (Meaningful, M). Words lasted 400 ms, so that the word frequency (Fw) was 2.5 Hz and the sentence frequency (Fs) 0.625 Hz. In two other conditions (Meaningful-noise, Mn, and Scrambled-noise, Sn), a multitalker background noise was added at a signal-to-noise ratio of 0 dB.

Sensor-level spectral analysis was used to identify peaks of power at Fs and Fw. MEG epochs were extracted from the 5th word onset to the 32nd word offset, and artifact-free epochs were Fourier-transformed (frequency resolution 0.089 Hz). At each sensor, the amplitude spectrum was estimated via Fourier transformation, and only the Euclidean norm of the amplitude across pairs of planar gradiometers was retained. For each subject and condition, SNR (signal-to-noise ratio) responses were computed as the ratio between the amplitude at each frequency bin and the average amplitude at the 10 surrounding frequency bins (5 on each side). The significance of the SNR at Fs, 2 × Fs, 3 × Fs, and Fw, and of the mean SNR across 1, 2, and 3 × Fs, was estimated with a non-parametric permutation test in which the genuine SNR was compared with a permutation distribution derived from epochs in which word or sentence onsets were randomized across epochs (1000 permutations; significance threshold p < 0.05, corrected for multiple comparisons across sensors). The SNR at harmonics of Fs was considered because brain responses are not expected to be purely sinusoidal [6]. Along the same lines, these SNRs were also combined, since tracking at harmonics of Fs is expected to reflect the same processes as tracking at Fs [7]. We also statistically compared the peak relative amplitude (SNR) at Fs and its harmonics between pairs of conditions (M vs. S; Mn vs. Sn; M vs. Mn) with a non-parametric permutation test in which subjects’ values in the two conditions were randomly swapped.
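The core steps of this analysis (amplitude spectra, pooling across planar-gradiometer pairs, neighbour-bin SNR, and the paired between-condition permutation test) can be summarized compactly in code. The sketch below is a minimal illustration using synthetic NumPy data; the sampling rate, channel layout, and all variable and function names are assumptions for illustration only, and the within-condition onset-randomization test is omitted. It is not the authors' actual pipeline.

```python
# Minimal sketch of the spectral SNR analysis described above, on synthetic data.
# Sampling rate, channel count, and channel ordering are assumptions.
import numpy as np

fs_hz = 1000.0                      # assumed MEG sampling rate
f_word, f_sent = 2.5, 0.625         # Fw (400-ms words) and Fs (4-word sentences)

rng = np.random.default_rng(0)
n_epochs, n_grad = 20, 204          # assumed: 102 planar-gradiometer pairs
n_samples = int(28 * 0.4 * fs_hz)   # 5th word onset to 32nd word offset = 28 words
epochs = rng.standard_normal((n_epochs, n_grad, n_samples))

# Fourier-transform each epoch; frequency resolution = fs_hz / n_samples ~ 0.089 Hz
freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs_hz)
amp = np.abs(np.fft.rfft(epochs, axis=-1)).mean(axis=0)         # (n_grad, n_freqs)

# Keep only the Euclidean norm across planar-gradiometer pairs
# (assumes paired channels are interleaved along the channel axis)
amp_pairs = np.sqrt(amp[0::2] ** 2 + amp[1::2] ** 2)            # (n_pairs, n_freqs)

def snr(amp_spec, freqs, target_hz, n_side=5):
    """SNR at the target bin: amplitude there divided by the mean amplitude
    of the 10 surrounding bins (5 on each side)."""
    k = int(np.argmin(np.abs(freqs - target_hz)))
    neighbors = np.concatenate(
        [amp_spec[:, k - n_side:k], amp_spec[:, k + 1:k + 1 + n_side]], axis=1)
    return amp_spec[:, k] / neighbors.mean(axis=1)

snr_word = snr(amp_pairs, freqs, f_word)                         # SNR at Fw per sensor pair
snr_sent = np.mean([snr(amp_pairs, freqs, h * f_sent) for h in (1, 2, 3)], axis=0)

def paired_permutation_test(x, y, n_perm=1000, seed=1):
    """Paired permutation test: randomly swap each subject's values between two
    conditions (equivalent to sign-flipping the paired differences)."""
    rng = np.random.default_rng(seed)
    diff = np.asarray(x) - np.asarray(y)
    observed = diff.mean()
    null = np.array([(rng.choice([-1.0, 1.0], size=diff.size) * diff).mean()
                     for _ in range(n_perm)])
    p = (np.sum(np.abs(null) >= np.abs(observed)) + 1) / (n_perm + 1)
    return observed, p
```

In this sketch, paired_permutation_test would be applied to per-subject SNR values from two conditions (e.g., M vs. Mn) to reproduce the kind of between-condition comparison described above.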
Figure 1 displays the SNR spectra averaged across subjects and the corresponding topographic maps of the SNR at Fw and of the mean SNR across 1, 2, and 3 × Fs (sentence frequency) in all conditions. A clear peak was disclosed at 2.5 Hz in all conditions, and significant sensors covered bilateral temporal areas. Another clear peak was disclosed at Fs, as well as at 2 × Fs and 3 × Fs, only for M and Mn; significant sensors again covered bilateral temporal areas. The mean SNR across 1, 2, and 3 × Fs did not differ between the M and Mn conditions, suggesting that noise did not strongly impair the syntactic building of sentences.

This study shows that syntax-driven tracking of sentences is robust to background noise in situations where prosodic cues are absent. This result contrasts with reports on connected speech, where cortical tracking tends to decrease at similar cocktail-party noise levels [4]. These results suggest that the impact of noise on speech-brain coupling is probably imputable to the loss of prosodic elements rather than to the loss of syntactic content per se, at least at reasonable SNRs.

Figure 1

Acknowledgements

Maxime Niesen and Marc Vander Ghinst were supported by the Fonds Erasme (Brussels, Belgium). Mathieu Bourguignon and Julie Bertels are supported by the program Attract of Innoviris (grant 2015-BB2B-10). Mathieu Bourguignon is supported by the Marie Skłodowska-Curie Action of the European Commission (grant 743562). Xavier De Tiège is Postdoctoral Master Clinical Specialist at the Fonds de la Recherche Scientifique (FRS-FNRS, Brussels, Belgium).
