Abstract

How the human brain retains relevant vocal information while suppressing irrelevant sounds is one of the ongoing challenges in cognitive neuroscience. Knowledge of the mechanisms underlying this ability can be used to identify whether a person is distracted while listening to target speech, especially in a learning context. This paper investigates the neural correlates of learning from speech presented in a noisy environment, using an ecologically valid learning context and electroencephalography (EEG). To this end, the following listening tasks were performed while 64-channel EEG signals were recorded: (1) attentive listening to lectures in background sound, (2) attentive listening to the background sound presented alone, and (3) inattentive listening to the background sound. For the first task, 13 lectures of 5 min in length, embedded in different types of realistic background noise, were presented to participants who were asked to focus on the lectures. Multi-talker babble, continuous highway, and fluctuating traffic sounds were used as background noise. After the second task, a written exam was taken to quantify the amount of information that participants had acquired and retained from the lectures. In addition to various power spectrum-based EEG features in different frequency bands, the peak frequency and long-range temporal correlations (LRTC) of alpha-band activity were estimated. To reduce the feature dimensionality, a principal component analysis (PCA) was applied to the different listening conditions, resulting in the feature combinations that discriminate most between listening conditions and persons. Linear mixed-effect modeling was used to explain the origin of the extracted principal components, showing their dependence on listening condition and type of background sound. Following this unsupervised step, a supervised analysis was performed to explain the link between the exam results and the EEG principal component scores, using both linear fixed- and mixed-effect modeling. Results suggest that the ability to learn from speech presented in environmental noise can be predicted better by several components over specific brain regions than by knowing the background noise type. These components were linked to deterioration in attention, speech envelope following, decreased focus during listening, cognitive prediction error, and specific inhibition mechanisms.
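To make the analysis pipeline concrete, the following is a minimal Python sketch, not the authors' code, of the feature-extraction and dimensionality-reduction steps the abstract describes: Welch band power per channel, alpha peak frequency, a detrended fluctuation analysis (DFA) exponent of the alpha amplitude envelope as the LRTC measure, and PCA across recordings. The sampling rate, band edges, DFA scales, and the random placeholder data are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the feature pipeline described in
# the abstract: band powers, alpha peak frequency, LRTC of the alpha envelope
# via DFA, then PCA across recordings. All parameters here are illustrative.
import numpy as np
from scipy.signal import welch, butter, filtfilt, hilbert
from sklearn.decomposition import PCA

FS = 256  # assumed sampling rate (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs=FS):
    """Mean power spectral density within each frequency band."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in BANDS.values()]

def alpha_peak_frequency(x, fs=FS):
    """Frequency of the PSD maximum inside the alpha band (8-13 Hz)."""
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    m = (f >= 8) & (f <= 13)
    return f[m][np.argmax(pxx[m])]

def alpha_envelope(x, fs=FS):
    """Amplitude envelope of alpha-band activity (band-pass + Hilbert)."""
    b, a = butter(4, [8 / (fs / 2), 13 / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

def dfa_exponent(x, scales=(8, 16, 32, 64, 128, 256, 512)):
    """DFA scaling exponent; values > 0.5 indicate long-range temporal
    correlations (LRTC) in the input series."""
    y = np.cumsum(x - x.mean())  # integrated signal profile
    flucts = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        t = np.arange(s)
        # RMS of residuals around a least-squares linear trend per window
        rms = [np.sqrt(np.mean((g - np.polyval(np.polyfit(t, g, 1), t)) ** 2))
               for g in segs]
        flucts.append(np.mean(rms))
    return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

def features(eeg):
    """eeg: (n_channels, n_samples) array -> one flat feature vector."""
    out = []
    for ch in eeg:
        out.extend(band_powers(ch))
        out.append(alpha_peak_frequency(ch))
        out.append(dfa_exponent(alpha_envelope(ch)))
    return np.asarray(out)

# One row per (participant, listening condition); random placeholder data.
X = np.vstack([features(np.random.randn(64, 30 * FS)) for _ in range(12)])
pc_scores = PCA(n_components=10).fit_transform(X)  # principal component scores
```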

Highlights

  • The human brain is remarkably capable of focusing on one specific sound while suppressing all others (Alain, 2007)

  • The current study showed that 64-channel EEG makes it possible to predict, beyond chance level, the amount of vocal information that participants acquire and retain from lectures presented in different environmental sounds

  • Five principal component scores of the EEG features obtained under different listening conditions and for different persons were essential for this prediction; a minimal illustrative sketch follows these highlights
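As a rough illustration of the "beyond chance level" claim, the sketch below fits a cross-validated linear model from five PC scores to exam z-scores and compares its accuracy against a permutation null. This is a hedged Python sketch with random placeholder data, not the authors' pipeline; in practice each row would hold one participant's PC scores and exam result.

```python
# Illustrative test of above-chance prediction: cross-validated linear
# regression from PC scores to exam z-scores, compared to a permutation null.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
n = 60                            # hypothetical participant-lecture rows
X = rng.standard_normal((n, 5))   # the five contributing PC scores
y = rng.standard_normal(n)        # exam z-scores (placeholder data)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
score = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2").mean()

# Permutation null: shuffle exam scores and refit to get chance-level R^2.
null = [cross_val_score(LinearRegression(), X, rng.permutation(y),
                        cv=cv, scoring="r2").mean() for _ in range(1000)]
p = (np.sum(np.asarray(null) >= score) + 1) / (len(null) + 1)
print(f"CV R^2 = {score:.3f}, permutation p = {p:.3f}")
```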

Introduction

The human brain is remarkably capable of focusing on one specific sound while suppressing all others (Alain, 2007). The ability to acquire and retain vocal information strongly affects overall learning performance, and doing so is even more challenging in the presence of environmental noise.

[Figure: Lower triangular arrows are directed toward the background noises with higher (better) exam z-scores. (A) Contributing PC scores (PCSs): p < 10⁻⁴ → Parietal PCS 7 (−0.86); p < 10⁻³ → Central PCS 1 (0.63); p < 0.01 → Occipital PCS 1 (−0.51), Occipital PCS 2 (−0.33), Occipital PCS 7 (0.50), Occipital PCS 9 (−0.32), Frontal PCS 4 (−0.33), Central PCS 5 (0.57), Central PCS 8 (−0.41), Left Temporal PCS 4 (0.52); p < 0.05 → Occipital PCS 4 (−0.25), Frontal PCS 3 (−0.16), Parietal PCS 1 (0.42), Parietal PCS 5 (−0.40), Parietal PCS 8 (0.25), Left Temporal PCS 2 (0.65), Left Temporal PCS 1 (−0.53), Left Temporal PCS 6 (0.26); p < 0.2 → Frontal PCS 6 (−0.14), Frontal PCS 7 (0.14), Left Temporal PCS 3 (0.12), Left Temporal PCS 5 (−0.18), Right Temporal PCS 2 (−0.42).]

A good model needs to fit the data well, but it also needs to be parsimonious. A standard criterion for balancing the two takes two nested model objects as arguments and returns an ANOVA-style test of whether the more complex model captures the data significantly better than the simpler model.
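The nested-model comparison just described can be expressed as a likelihood-ratio test. Below is a hedged Python/statsmodels analogue of that criterion, not the authors' code; the column names ("exam_z", "noise_type", "pcs_parietal7", "pcs_central1", "participant") and the data file are hypothetical.

```python
# Likelihood-ratio comparison of nested mixed-effects models, analogous to
# the ANOVA-style criterion described above. Illustrative sketch only;
# column and file names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def lrt(simple, complex_):
    """Test whether the more complex nested model fits significantly better."""
    stat = 2 * (complex_.llf - simple.llf)
    df = len(complex_.fe_params) - len(simple.fe_params)
    return stat, chi2.sf(stat, df)

# One row per participant x lecture: exam z-score, noise type, PC scores.
df = pd.read_csv("exam_and_pcs.csv")  # hypothetical data file

# reml=False so log-likelihoods are comparable across fixed-effect structures;
# both models include a random intercept per participant.
m_simple = smf.mixedlm("exam_z ~ noise_type", df,
                       groups=df["participant"]).fit(reml=False)
m_complex = smf.mixedlm("exam_z ~ noise_type + pcs_parietal7 + pcs_central1",
                        df, groups=df["participant"]).fit(reml=False)

stat, p = lrt(m_simple, m_complex)
print(f"LRT: chi2 = {stat:.2f}, p = {p:.4g}")
```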
