Abstract

How to extract discriminative latent feature representations from electroencephalography (EEG) signals and build a model that generalizes across subjects is a key challenge in EEG-based emotion recognition research. This study proposed a novel emotion recognition model based on deep latent feature fusion of EEG signals and multi-task learning, referred to as MTLFuseNet. MTLFuseNet learned spatio-temporal latent features of EEG in an unsupervised manner with a variational autoencoder (VAE) and learned spatio-spectral latent features in a supervised manner with a graph convolutional network (GCN) and a gated recurrent unit (GRU) network. The two latent features were then fused to form more complementary and discriminative spatio-temporal–spectral features for EEG signal representation. In addition, MTLFuseNet was constructed under a multi-task learning framework: focal loss was introduced to address the class imbalance in the emotion datasets, and triplet-center loss was introduced to make the fused latent feature vectors more discriminative. Finally, a subject-independent leave-one-subject-out cross-validation strategy was used for extensive validation on two public datasets, DEAP and DREAMER. On the DEAP dataset, the average accuracies for valence and arousal are 71.33% and 73.28%, respectively; on the DREAMER dataset, they are 80.43% and 83.33%. The experimental results show that the proposed MTLFuseNet achieves excellent recognition performance, outperforming state-of-the-art methods.
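The two auxiliary losses named in the abstract are standard objectives; below is a minimal PyTorch sketch of how focal loss and triplet-center loss could be combined over the fused features. The class names, the gamma, alpha, and margin values, and the 0.1 loss weight are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    """Focal loss: down-weights well-classified samples to counter class imbalance."""
    def __init__(self, gamma=2.0, alpha=0.25):  # gamma/alpha values are assumptions
        super().__init__()
        self.gamma, self.alpha = gamma, alpha

    def forward(self, logits, targets):
        ce = F.cross_entropy(logits, targets, reduction="none")
        pt = torch.exp(-ce)  # model's probability for the true class
        return (self.alpha * (1.0 - pt) ** self.gamma * ce).mean()

class TripletCenterLoss(nn.Module):
    """Pulls each feature toward its own class center and pushes it at least
    `margin` away from the nearest other-class center."""
    def __init__(self, num_classes, feat_dim, margin=1.0):  # margin is an assumption
        super().__init__()
        self.margin = margin
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, targets):
        dists = torch.cdist(features, self.centers)              # (batch, classes)
        pos = dists.gather(1, targets.unsqueeze(1)).squeeze(1)   # distance to own center
        neg = dists.scatter(1, targets.unsqueeze(1), float("inf")).min(dim=1).values
        return F.relu(pos + self.margin - neg).mean()

# Hypothetical usage: `fused` stands in for the concatenated VAE + GCN-GRU features.
fused = torch.randn(32, 128)                     # batch of 32 fused feature vectors
labels = torch.randint(0, 2, (32,))              # binary valence (or arousal) labels
logits = nn.Linear(128, 2)(fused)
loss = FocalLoss()(logits, labels) + 0.1 * TripletCenterLoss(2, 128)(fused, labels)
```

Keeping the class centers as learnable parameters, as sketched here, lets the triplet-center term shape the fused embedding space jointly with the classification head during multi-task training.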
