Abstract
Due to the instability and complex distribution of electroencephalography (EEG) signals and the large cross-subject variations, extracting valuable and discriminative emotional information from EEG remains a significant challenge in EEG-based emotion recognition. In this paper, we propose the Bi-Stream MLP-SA Mixer (BiSMSM), a novel model for emotion recognition that consists of two streams: a Spatial stream and a Temporal stream. The model captures signal information from four angles, from space to time and from local to global, aiming to encode more discriminative features for describing emotions. The Spatial stream focuses on spatial information, while the Temporal stream concentrates on correlations in the time domain. The two streams share a similar structure: each consists of an MLP-based module that extracts regional in-channel and cross-channel information, followed by a global self-attention mechanism that captures global signal correlations. We conduct subject-independent experiments on the DEAP and DREAMER datasets to verify the performance of our model, which outperforms related methods. We obtain an average accuracy of 62.97% for valence classification and 61.87% for arousal classification on DEAP, and 60.87% for valence and 63.28% for arousal on DREAMER.

Keywords: Emotion recognition · Deep learning · Self-attention · EEG signals
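The per-stream pattern described in the abstract (regional MLP mixing of in-channel and cross-channel information, followed by global self-attention) can be illustrated with a minimal NumPy sketch. All shapes, weight initializations, and layer sizes here are illustrative assumptions, not the paper's actual BiSMSM configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mixer_block(x, w_cross, w_in):
    """MLP-style mixing with residual connections.
    x: (C, T) EEG segment; w_cross mixes across channels, w_in mixes within
    each channel along time."""
    x = x + w_cross @ x   # cross-channel mixing: (C, C) @ (C, T)
    x = x + x @ w_in      # in-channel (temporal) mixing: (C, T) @ (T, T)
    return x

def self_attention(x, w_q, w_k, w_v):
    """Single-head global self-attention over time steps.
    x: (T, D) with channels treated as the feature dimension."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    a = softmax(q @ k.T / np.sqrt(k.shape[-1]), axis=-1)
    return a @ v

# Hypothetical sizes: 32 electrodes, 128 time samples per segment.
C, T = 32, 128
x = rng.standard_normal((C, T))
h = mixer_block(x,
                0.01 * rng.standard_normal((C, C)),
                0.01 * rng.standard_normal((T, T)))
# Transpose so attention runs globally over time, with channels as features.
z = self_attention(h.T, *(0.1 * rng.standard_normal((C, C)) for _ in range(3)))
print(z.shape)  # (128, 32)
```

In the paper's two-stream design, one such pipeline would operate on the spatial axis and another on the temporal axis before their features are combined for valence/arousal classification.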