Abstract

Electroencephalogram (EEG) signals carry emotion-related information but are subject to severe signal variability. In the literature on EEG-based emotion recognition, diverse features have been designed to capture the representative characteristics of EEG, and various domain adaptation techniques have been devised to reduce the distributional discrepancy between the source and target domains. However, existing domain adaptation techniques for this task treat all source samples equally and ignore differences in their transferability, which limits the domain invariance and class discriminability of the learned EEG features for emotion recognition. To address this problem, we propose the Transposition Multi-Layer Perceptron (TMLP) and the Sample-Reweighted Domain Adaptation Neural Network (SRDANN) within a single learning framework. TMLP extracts robust multi-channel EEG features related to emotions, and SRDANN transfers discriminative knowledge from the source domain to the target domain for emotion classification. By concentrating on the source samples with stronger transferability during domain adaptation, TMLP+SRDANN learns more domain-invariant and class-discriminative features from EEG, providing a more effective solution to emotion recognition in the challenging cross-subject scenario. Subject-independent emotion recognition experiments on two benchmark datasets demonstrate the effectiveness of our method: an accuracy of 81.04% for emotion classification on SEED, and 61.88% for valence and 57.70% for arousal classification on DEAP.
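
To make the sample-reweighting idea concrete, below is a minimal sketch of a DANN-style adversarial loss in which source samples judged more transferable receive larger weights. The weighting heuristic (treating source samples whose domain prediction is ambiguous as more transferable), the `GradReverse` layer, and the function names are illustrative assumptions, not the paper's exact SRDANN formulation.

```python
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used in DANN-style adversarial adaptation."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back to the feature extractor.
        return -ctx.lambd * grad_output, None


def sample_reweighted_dann_loss(feat_s, feat_t, labels_s, clf, dom_clf, lambd=1.0):
    """Illustrative sample-reweighted domain-adversarial loss.

    feat_s / feat_t: source / target features from the feature extractor (e.g. TMLP).
    clf: emotion classifier head; dom_clf: domain classifier head (1 logit).
    Source samples with ambiguous domain predictions are assumed more
    transferable and are up-weighted in the domain loss.
    """
    # Standard classification loss on labeled source samples.
    cls_loss = F.cross_entropy(clf(feat_s), labels_s)

    # Domain classifier sees gradient-reversed features.
    dom_s = dom_clf(GradReverse.apply(feat_s, lambd)).squeeze(-1)
    dom_t = dom_clf(GradReverse.apply(feat_t, lambd)).squeeze(-1)

    # Transferability weights: close to 1 when the domain prediction is
    # uncertain (probability near 0.5), close to 0 when it is confident.
    with torch.no_grad():
        p_s = torch.sigmoid(dom_s)
        w_s = 1.0 - 2.0 * torch.abs(p_s - 0.5)
        w_s = w_s / (w_s.mean() + 1e-8)  # normalize to mean 1

    dom_loss_s = F.binary_cross_entropy_with_logits(
        dom_s, torch.zeros_like(dom_s), weight=w_s)
    dom_loss_t = F.binary_cross_entropy_with_logits(
        dom_t, torch.ones_like(dom_t))

    return cls_loss + dom_loss_s + dom_loss_t
```

In this sketch the reweighting only affects the source-side domain loss; the emotion classification loss and the target-side domain loss are left unweighted, which is one of several reasonable design choices under these assumptions.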
