Abstract

Emotion recognition based on electroencephalogram (EEG) is a critical task in the field of affective brain-computer interfaces. However, due to the non-stationarity and individual variability of EEG, hand-designed features cannot adequately capture the nonlinear and high-dimensional properties of raw EEG. Spatial–temporal models have been shown to capture spatial–temporal information in EEG effectively; however, these models are characterized by complex structures, large parameter counts, and the need for extensive training data. To overcome these disadvantages, this paper proposes an end-to-end spatial–temporal model for emotion recognition based on raw EEG, called the attention-based convolutional closed-form continuous-time neural network (AC-CfC). The model employs a channel attention mechanism to weight and encode the EEG channels, capturing inter-channel dependencies. Subsequently, one-dimensional convolutional neural networks and closed-form continuous-time neural networks are used to extract deep spatial–temporal features. Additionally, an adaptive loss-controlling mechanism is designed to improve the model's ability to discriminate between easily confused classes. To verify the effectiveness of the proposed model, experiments are conducted on the DEAP and DREAMER datasets. The average accuracies of the proposed model reach 94.76% and 93.01% for valence and arousal in subject-independent experiments on the DEAP dataset, improvements of 11.8% and 8.73%, respectively, over the state-of-the-art model ACRNN. On the DREAMER dataset, the average accuracies reach 81.83%, 81.22%, and 80.63% for valence, arousal, and dominance, improvements of 2.55%, 6.64%, and 6.98%, respectively, over ACRNN. These results show that the proposed model outperforms comparable state-of-the-art models.
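To make the pipeline described above concrete (channel attention, 1D convolution over the attended channels, then a recurrent temporal stage and classifier), the PyTorch sketch below illustrates one plausible arrangement. It is a minimal sketch under stated assumptions: the squeeze-and-excitation-style attention block, the layer sizes, and the module names are illustrative guesses, and a plain GRU stands in for the closed-form continuous-time (CfC) layer, which is not reproduced here; this is not the authors' implementation.

import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Hypothetical squeeze-and-excitation-style attention over EEG channels."""

    def __init__(self, n_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(n_channels // reduction, n_channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, time)
        w = self.fc(x.mean(dim=-1))           # per-channel weights from the temporal average
        return x * w.unsqueeze(-1)            # re-weight each EEG channel


class ACCfCSketch(nn.Module):
    """Attention -> 1D conv spatial features -> recurrent temporal stage -> classifier.
    A GRU is used here only as a placeholder for the paper's CfC layer."""

    def __init__(self, n_channels: int = 32, n_classes: int = 2, hidden: int = 64):
        super().__init__()
        self.attn = ChannelAttention(n_channels)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=7, padding=3),
            nn.BatchNorm1d(hidden),
            nn.ELU(),
            nn.MaxPool1d(4),
        )
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)  # stand-in for CfC
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                     # x: (batch, channels, time) raw EEG
        x = self.attn(x)
        x = self.conv(x)                      # (batch, hidden, time')
        out, _ = self.temporal(x.transpose(1, 2))
        return self.head(out[:, -1])          # class logits from the last time step


if __name__ == "__main__":
    eeg = torch.randn(8, 32, 384)             # e.g. 8 segments, 32 channels, 3 s at 128 Hz
    print(ACCfCSketch()(eeg).shape)            # torch.Size([8, 2])

In such a design, the attention block only rescales channels rather than mixing them, so the subsequent 1D convolution is what fuses spatial information across channels before the recurrent stage models temporal dynamics.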
