In the field of brain-computer interfaces, automatic emotion recognition from electroencephalogram (EEG) signals is of great significance. Deep learning can mine deep information in data; in particular, the convolutional neural network (CNN) and the long short-term memory network (LSTM) have remarkably improved accuracy in numerous fields, and researchers have therefore applied them to EEG-based emotion recognition. Nonetheless, existing CNN- and LSTM-based models still rely on data preprocessing and feature extraction. Furthermore, CNNs are limited in perceiving global dependencies, while LSTMs suffer from problems such as vanishing gradients on long sequences. In this paper, we propose the Spatiotemporal Symmetric Transformer Model (STS-Transformer), an effective EEG emotion recognition model, to overcome these problems. STS-Transformer is an end-to-end framework that recognizes emotion directly from raw EEG signals without data preprocessing or feature extraction. The method achieves 89.86% and 86.83% accuracy in the binary classification of valence and arousal, respectively, on the DEAP dataset; on the DREAMER dataset, the binary classification accuracies for valence and arousal are 85.09% and 82.32%. Our method therefore exhibits remarkable advantages over other end-to-end models in similar studies.
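To make the end-to-end idea concrete, the following is a minimal sketch of how Transformer-style self-attention can consume raw EEG segments directly: windows of the signal become tokens, and attention relates every window to every other one (capturing the global dependencies CNNs struggle with). All shapes, window sizes, and the identity Q/K/V projections here are illustrative assumptions, not the actual STS-Transformer configuration, which is described in the full paper.

```python
# Hypothetical sketch, stdlib only: scaled dot-product self-attention
# over raw EEG windows. Not the paper's STS-Transformer architecture.
import math
import random

random.seed(0)

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(tokens):
    """tokens: list of d-dimensional vectors, one per time window.
    Uses identity projections for Q, K, V to keep the sketch short."""
    d = len(tokens[0])
    out = []
    for q in tokens:
        # Similarity of this window to every window, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in tokens]
        weights = softmax(scores)
        # Weighted average of all windows (global dependency mixing).
        out.append([sum(w * v[j] for w, v in zip(weights, tokens))
                    for j in range(d)])
    return out

# Stand-in "raw EEG": 4 channels x 128 samples, no preprocessing.
eeg = [[random.gauss(0, 1) for _ in range(128)] for _ in range(4)]

# Tokenize: split into 8 windows of 16 samples; each window's
# per-channel mean forms a 4-dimensional token.
win = 16
tokens = [[sum(ch[t:t + win]) / win for ch in eeg]
          for t in range(0, 128, win)]

attended = self_attention(tokens)

# Toy read-out: mean-pool attended tokens, then threshold a scalar
# score as a stand-in for a learned binary head (e.g. high/low valence).
pooled = [sum(tok[j] for tok in attended) / len(attended)
          for j in range(len(attended[0]))]
label = 1 if sum(pooled) > 0 else 0
print(len(tokens), len(attended), label)
```

In a trained model the projections and read-out would be learned weights, and positional information would be added to the tokens; the point here is only that the pipeline runs from raw samples to a label with no hand-crafted feature extraction step.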