Abstract

Selective attention differentially modulates neural responses to simultaneously presented speech streams. In this study, single-trial EEG classification was performed to identify the attended speech from a two-talker speech mixture. During EEG recordings, normal-hearing listeners attended to one speech stream while listening to speech mixtures. The target-to-masker ratios (TMRs) varied from −9 to +9 dB. Individual speech streams were processed with head-related transfer functions to simulate different spatial locations. Two simulated spatial conditions (0° vs. ±90° and +45° vs. −45° azimuth) were tested for each TMR. Features based on (1) cross-correlation values between the EEG signals and the temporal envelope of each speech stream, or (2) correlation values between speech reconstructed from the EEG signals and the acoustic stimuli, were fed to the classifiers. The dimensionality of the feature vector was reduced using Principal Component Analysis. Linear Discriminant Analysis and Support Vector Machine classifiers were used to classify the EEG signals. Classifiers were trained and tested with five-fold cross-validation on data pooled across TMRs and source locations, for trial lengths from 50 s down to 10 s. Average classification accuracy was 85% with a 50 s trial length and remained as high as 70% when the trial length was reduced to 10 s.
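The classification pipeline described above (correlation-based features, PCA for dimensionality reduction, a linear classifier, five-fold cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the feature dimensions, the number of retained components, and the injected class effect are all hypothetical, and scikit-learn stands in for whatever toolchain the study actually used.

```python
# Hypothetical sketch of the pipeline from the abstract:
# correlation features -> PCA -> LDA, evaluated with five-fold CV.
# All data below are synthetic; dimensions are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

n_trials, n_features = 200, 64          # e.g. cross-correlation lags x EEG channels
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # attended talker label: 0 or 1

# Inject a weak class-dependent shift so the classifier has a signal to find;
# in the real study this signal comes from attention-modulated EEG responses.
X[y == 1, :8] += 0.8

clf = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"mean 5-fold accuracy: {scores.mean():.2f}")
```

An SVM could be substituted for the LDA step in the same pipeline (e.g. `sklearn.svm.SVC(kernel="linear")`), mirroring the abstract's comparison of the two classifiers.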
