Abstract
Electroencephalography (EEG) emotion recognition is an important task for brain–computer interfaces. The time, frequency, and spatial domains of EEG signals have been widely studied, but existing methods often ignore the correlations between spatial and temporal features, resulting in insufficient emotional representations. In this paper, a dual-module EEG emotion recognition method based on an improved capsule network and a residual Long Short-Term Memory network (ResLSTM) is proposed. The improved capsule network used as the spatial module is better suited to learning EEG-specific spatial representations. The ResLSTM temporal module inherits the information flow from the spatial module and performs complementary learning of the spatiotemporal dual-module features through residual connections, yielding more discriminative EEG features and ultimately improving the classification performance of the model. On the DEAP dataset, the average accuracies for arousal, valence, and dominance reached 98.06%, 97.94%, and 98.15%, respectively; on the DREAMER dataset, they reached 94.97%, 94.71%, and 94.96%, respectively. The experimental results indicate that the proposed method outperforms state-of-the-art approaches.
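To make the dual-module idea concrete, the following is a minimal PyTorch sketch of the architecture the abstract describes: a capsule-style spatial module feeding a residual LSTM temporal module. It is not the authors' implementation; the abstract does not specify the "improved" capsule routing, layer sizes, or input layout, so all class names, dimensions, and the pooling/fusion choices below are illustrative assumptions (e.g. 32 electrodes, a single-channel time–electrode input, and a simple projected residual sum).

```python
import torch
import torch.nn as nn


class SpatialCapsuleModule(nn.Module):
    """Simplified spatial module: conv feature extractor + primary capsules.
    The paper's 'improved capsule network' details are not in the abstract;
    this is a generic primary-capsule layer with the standard squash."""

    def __init__(self, in_channels=1, num_caps=32, caps_dim=8):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 64, kernel_size=3, padding=1)
        self.primary = nn.Conv2d(64, num_caps * caps_dim, kernel_size=3, padding=1)
        self.num_caps, self.caps_dim = num_caps, caps_dim

    @staticmethod
    def squash(s, dim, eps=1e-8):
        # Standard capsule squashing non-linearity.
        norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
        return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

    def forward(self, x):
        # x: (batch, 1, electrodes, time) -- assumed input layout
        h = torch.relu(self.conv(x))
        u = self.primary(h)                                   # (B, caps*dim, E, T)
        b, _, e, t = u.shape
        u = u.view(b, self.num_caps, self.caps_dim, e, t)
        u = self.squash(u, dim=2)
        u = u.mean(dim=3)                                     # pool over electrodes
        return u.reshape(b, -1, t).permute(0, 2, 1)           # (B, T, caps*dim)


class ResLSTMTemporalModule(nn.Module):
    """Temporal module: an LSTM whose output is fused with its (projected)
    input through a residual connection, as the abstract describes."""

    def __init__(self, in_dim, hidden_dim=128):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(in_dim, hidden_dim)  # match dims for the residual sum

    def forward(self, x):
        out, _ = self.lstm(x)
        return out + self.proj(x)  # residual fusion of spatial and temporal features


class DualModuleEEGNet(nn.Module):
    """End-to-end sketch: spatial capsule features -> residual LSTM -> classifier."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.spatial = SpatialCapsuleModule(num_caps=32, caps_dim=8)
        self.temporal = ResLSTMTemporalModule(in_dim=32 * 8, hidden_dim=128)
        self.head = nn.Linear(128, num_classes)

    def forward(self, x):
        feats = self.spatial(x)           # (B, T, 256)
        h = self.temporal(feats)          # (B, T, 128)
        return self.head(h.mean(dim=1))   # average over time, then classify


# Example: a batch of 4 segments, 32 electrodes, 128 time samples (hypothetical sizes).
logits = DualModuleEEGNet(num_classes=2)(torch.randn(4, 1, 32, 128))
```

In this sketch the residual connection lets the classifier see both the raw spatial capsule features (via the projection) and their temporal encoding, which is one plausible reading of the "complementary learning of the spatiotemporal dual-module features" described above.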