Abstract
Automatic emotion recognition based on multichannel electroencephalogram (EEG) data is a fundamental but challenging problem. Much previous research ignores the inter-channel and inter-frequency-band correlations of brain activity, which may carry information related to emotional states. In this work, we propose a 3-D feature construction method based on spatial-spectral information. First, the power values of each channel are arranged into a 2-D spatial feature map according to the electrode positions. Then, the maps from different frequency bands are stacked into a 3-D feature tensor to capture their complementary information. In parallel, we propose a novel framework based on feature fusion modules and a dilated bottleneck-based convolutional neural network (DBCN), which builds a more discriminative model for processing the 3-D features for EEG emotion recognition. Both participant-dependent and participant-independent protocols are used to evaluate the proposed DBCN on the DEAP benchmark dataset. Mean 2-class classification accuracies of 89.67% / 90.93% (participant-dependent) and 79.45% / 83.98% (participant-independent) were achieved for arousal / valence, respectively. These results suggest that the proposed method, based on the integration of spatial and spectral information, could be extended to the assessment of mood disorders and to human-computer interaction (HCI) applications.
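To make the feature construction concrete, the sketch below arranges per-channel band-power values into a spatial grid and stacks the bands along the depth axis, as the abstract describes. The 9x9 grid size, the placeholder channel-to-position mapping, and the use of four bands (e.g., theta/alpha/beta/gamma) are assumptions for illustration; the paper's exact layout is not given in the abstract.

```python
import numpy as np

def build_3d_feature(band_power, channel_pos, grid_shape=(9, 9)):
    """Arrange per-channel band-power values into a 3-D spatial-spectral tensor.

    band_power  : array of shape (n_bands, n_channels), e.g. power in four
                  frequency bands for each of the 32 DEAP channels.
    channel_pos : dict mapping channel index -> (row, col) scalp-grid position.
    Returns an array of shape (grid_h, grid_w, n_bands); grid cells that hold
    no electrode stay zero.
    """
    n_bands, n_channels = band_power.shape
    tensor = np.zeros((*grid_shape, n_bands), dtype=np.float32)
    for ch in range(n_channels):
        r, c = channel_pos[ch]
        tensor[r, c, :] = band_power[:, ch]  # stack bands along the depth axis
    return tensor

# Demo with random data: 4 bands x 32 channels and a placeholder layout
# (a real layout would follow the 10-20 electrode positions).
rng = np.random.default_rng(0)
demo_pos = {ch: divmod(ch, 9) for ch in range(32)}  # hypothetical mapping
feat = build_3d_feature(rng.random((4, 32)), demo_pos)
print(feat.shape)  # (9, 9, 4)
```

The abstract does not specify the DBCN architecture, so the following is only a generic dilated bottleneck pattern (1x1 reduce, 3x3 dilated convolution, 1x1 expand, with a residual connection), not the paper's network; the channel counts and dilation rate are illustrative.

```python
import torch
import torch.nn as nn

class DilatedBottleneck(nn.Module):
    """Generic dilated bottleneck block: the dilated 3x3 convolution enlarges
    the receptive field over the electrode grid without extra parameters."""
    def __init__(self, channels, bottleneck, dilation):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, kernel_size=1),   # reduce
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, kernel_size=3,
                      padding=dilation, dilation=dilation),   # dilated conv
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, kernel_size=1),   # expand
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))  # residual connection

# A batch of one 9x9 grid after an assumed stem that expands 4 bands to 16 channels.
x = torch.randn(1, 16, 9, 9)
block = DilatedBottleneck(channels=16, bottleneck=8, dilation=2)
print(block(x).shape)  # torch.Size([1, 16, 9, 9])
```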