Abstract

To understand listening effort in adverse conditions, it is important to know how the brain processes speech at different signal-to-noise ratios (SNRs). To investigate this, we conducted a study with 33 hearing-impaired individuals, whose electroencephalographic (EEG) signals were recorded while they listened to sentences presented in high and low levels of background noise. To discriminate between these two conditions, features were extracted from the 64-channel EEG recordings using the power spectrum obtained by a Fast Fourier Transform. Feature vectors were selected on an individual basis using the statistical R² approach. The selected features were then classified by a Support Vector Machine with a nonlinear kernel, and the classification results were validated using a leave-one-out strategy, yielding an average classification accuracy of 83% (SD = 6.4%) across all 33 subjects. The most discriminative features were found in the high-beta (19-30 Hz) and gamma (30-45 Hz) bands. These results suggest that specific brain oscillations are involved in coping with background noise during speech stimuli, which may reflect differences in cognitive load between the low and high background noise conditions.
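The pipeline described above — FFT band-power features, per-feature R² selection, and an RBF-kernel SVM validated with leave-one-out cross-validation — can be sketched as follows. This is an illustrative toy example on synthetic data, not the authors' code: the sampling rate, band edges, number of selected features, and the simulated 25 Hz "high-noise" component are all assumptions made for the demonstration.

```python
# Illustrative sketch (not the study's implementation): FFT band-power
# features, R^2-based feature selection, and an RBF-SVM with
# leave-one-out validation, on synthetic EEG-like data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
fs = 256                                  # assumed sampling rate (Hz)
n_trials, n_channels, n_samples = 40, 4, fs * 2  # toy dimensions

# Synthetic data: the "high-noise" class (label 1) gets extra 25 Hz
# (high-beta) power added on top of Gaussian background activity.
y = np.repeat([0, 1], n_trials // 2)
X_raw = rng.standard_normal((n_trials, n_channels, n_samples))
t = np.arange(n_samples) / fs
X_raw[y == 1] += 0.8 * np.sin(2 * np.pi * 25 * t)

# Power spectrum via FFT, averaged into frequency bands per channel.
freqs = np.fft.rfftfreq(n_samples, 1 / fs)
power = np.abs(np.fft.rfft(X_raw, axis=-1)) ** 2
bands = {"alpha": (8, 12), "beta": (12, 19),
         "high-beta": (19, 30), "gamma": (30, 45)}
feats = np.stack(
    [power[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
     for lo, hi in bands.values()],
    axis=-1,
).reshape(n_trials, -1)

# R^2 selection: squared Pearson correlation of each feature with the
# binary condition label; keep the most discriminative features.
r = np.array([np.corrcoef(feats[:, j], y)[0, 1]
              for j in range(feats.shape[1])])
top = np.argsort(r ** 2)[::-1][:4]

# Nonlinear (RBF-kernel) SVM validated with leave-one-out CV.
acc = cross_val_score(SVC(kernel="rbf"), feats[:, top], y,
                      cv=LeaveOneOut()).mean()
print(f"LOO accuracy: {acc:.2f}")
```

Because the simulated high-beta component is strong, the selected features separate the two classes almost perfectly here; on real EEG, accuracies like the reported 83% reflect far subtler spectral differences.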
