Abstract

Electroencephalogram (EEG) emotion recognition suffers from cross-domain identification difficulties caused mainly by cross-subject and cross-session differences. Inspired by domain adaptation, this study constructs a novel multi-source domain adaptation network with a spatio-temporal feature extractor (MSDA-SFE) for EEG emotion recognition, which reduces signal differences between subjects and between sessions so that target-domain features align more easily with source-domain features. In common feature learning, a spatio-temporal feature extraction module acquires domain-invariant characteristics of the source and target domains, where one subject is selected randomly as the target domain and the remaining subjects serve as source domains. The extracted target-domain features are then paired with the features of each source domain to generate N-1 pairs of concatenated features. These pairs are translated into N-1 sets of domain-specific EEG features through N-1 parallel branches, in which the target-domain features are better aligned in the latent space, and the classification boundary for target samples is obtained by minimizing the disparity between branches. Finally, the domain-specific features are fed to their respective classifiers to produce N-1 predictions, whose mean is taken as the final result. Extensive experiments on the SEED, SEED-IV, and DEAP benchmark databases show that MSDA-SFE outperforms the comparison methods, and an additional experiment illustrates that the method has strong generalization capacity.
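The inference flow described above (pair the target features with each of the N-1 source domains, pass each pair through its own branch and classifier, then average the N-1 predictions) can be sketched structurally as follows. This is a minimal illustration, not the paper's implementation: the spatio-temporal extractor, branch transforms, and classifiers are replaced here with hypothetical stand-in functions, and all names and shapes are assumptions.

```python
# Structural sketch of the MSDA-SFE prediction pipeline from the abstract.
# The real extractor, branches, and classifiers are deep networks; the
# stand-ins below only mimic the data flow, not the learned mappings.

def extract_features(eeg):
    # Stand-in for the spatio-temporal feature extractor: assumes the
    # input is already a flat list of floats and passes it through.
    return list(eeg)

def branch_transform(paired, k):
    # Stand-in for the k-th domain-specific branch (identity here).
    return list(paired)

def branch_classifier(feat, k, n_classes=3):
    # Stand-in classifier: produces a normalized score distribution.
    raw = [abs(feat[i % len(feat)]) + k + i + 1.0 for i in range(n_classes)]
    total = sum(raw)
    return [r / total for r in raw]

def predict(target_eeg, source_feats, n_classes=3):
    """Pair target features with each source domain, run each pair
    through its branch and classifier, and average the predictions."""
    t = extract_features(target_eeg)
    preds = []
    for k, s in enumerate(source_feats):        # N-1 source domains
        paired = t + s                          # concatenated feature pair
        specific = branch_transform(paired, k)  # domain-specific features
        preds.append(branch_classifier(specific, k, n_classes))
    n = len(preds)                              # N-1 branch predictions
    return [sum(p[c] for p in preds) / n for c in range(n_classes)]

if __name__ == "__main__":
    target = [0.2, -0.5, 0.1]
    sources = [[0.3, 0.4, -0.1], [0.0, 0.6, 0.2]]  # two source subjects
    print(predict(target, sources))
```

The averaging step at the end is the only part the abstract specifies exactly: each of the N-1 classifiers votes, and the mean of the votes is the final emotion prediction.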

