Abstract

Owing to the instability and complex distribution of electroencephalography (EEG) signals, together with large cross-subject variations, extracting valuable and discriminative emotional information from EEG remains a significant challenge. In this paper, we propose the Bi-Stream Multilayer Perceptron – Self-Attention Mixer (BiSMSM), a novel model for EEG-based emotion recognition. The proposed model consists of two streams, a spatial stream and a temporal stream, and jointly captures information from temporal, spatial, local, and global perspectives, aiming to encode more discriminative emotion-related features. The spatial stream focuses on spatial information, while the temporal stream concentrates on correlation information in the time domain. The two streams share a similar structure: each contains a Multilayer Perceptron (MLP) based module that extracts regional in-channel and cross-channel information, followed by a self-attention module that explores global signal correlations. Subject-independent experiments on the public benchmark datasets DEAP and DREAMER demonstrate the advantage of our model over related advanced approaches. Specifically, BiSMSM obtains an accuracy of 63.10% for valence classification and 61.89% for arousal classification on DEAP, and 61.88% for valence classification and 64.25% for arousal classification on DREAMER.
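
To make the two-stream design concrete, the following is a minimal PyTorch sketch of the BiSMSM idea described above: each stream applies an MLP-based mixing block and then self-attention, with the spatial stream treating EEG channels as tokens and the temporal stream treating time points as tokens. The layer sizes, pooling, fusion by concatenation, and the names `MixerBlock`, `Stream`, and `BiSMSMSketch` are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of the BiSMSM two-stream idea (assumed PyTorch implementation;
# hyperparameters and fusion scheme are illustrative, not the paper's exact ones).
import torch
import torch.nn as nn


class MixerBlock(nn.Module):
    """MLP-based module: a token-mixing MLP (cross-channel) followed by a
    channel-mixing MLP (in-channel), in the style of an MLP-Mixer block."""

    def __init__(self, num_tokens, dim, hidden=128):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens)
        )
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim)
        )

    def forward(self, x):                        # x: (batch, tokens, dim)
        y = self.norm1(x).transpose(1, 2)        # mix information across tokens
        x = x + self.token_mlp(y).transpose(1, 2)
        x = x + self.channel_mlp(self.norm2(x))  # mix information within each token
        return x


class Stream(nn.Module):
    """One stream: MLP-based mixing for local/regional patterns, then
    multi-head self-attention for global correlations."""

    def __init__(self, num_tokens, dim, heads=4):
        super().__init__()
        self.mixer = MixerBlock(num_tokens, dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        x = self.mixer(x)
        a, _ = self.attn(x, x, x)
        return self.norm(x + a).mean(dim=1)      # pooled stream feature


class BiSMSMSketch(nn.Module):
    """Two streams: the spatial stream uses EEG channels as tokens, the
    temporal stream uses time points as tokens; features are fused by
    concatenation before classification (an assumed fusion scheme)."""

    def __init__(self, n_channels=32, n_samples=128, n_classes=2):
        super().__init__()
        self.spatial = Stream(num_tokens=n_channels, dim=n_samples)
        self.temporal = Stream(num_tokens=n_samples, dim=n_channels)
        self.head = nn.Linear(n_samples + n_channels, n_classes)

    def forward(self, x):                        # x: (batch, channels, samples)
        s = self.spatial(x)                      # tokens = electrodes
        t = self.temporal(x.transpose(1, 2))     # tokens = time points
        return self.head(torch.cat([s, t], dim=-1))


if __name__ == "__main__":
    model = BiSMSMSketch()
    eeg = torch.randn(8, 32, 128)                # e.g. 32 EEG channels, 128 samples
    print(model(eeg).shape)                      # torch.Size([8, 2])
```

In this sketch the binary head corresponds to a single valence or arousal classification task; separate heads (or separate models) would be trained for the two dimensions reported on DEAP and DREAMER.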
