Abstract

Since the electroencephalogram (EEG) is resistant to camouflage, it has been a reliable data source for objective emotion recognition. EEG is naturally multi-rhythm and multi-channel, from which multiple features can be extracted for further processing. In EEG-based emotion recognition, it is important to investigate whether there exist common features shared across different emotional states as well as specific features associated with each emotional state. However, this fundamental question is ignored by most existing studies. To this end, we propose a Joint label-Common and label-Specific Features Exploration (JCSFE) model for semi-supervised cross-session EEG emotion recognition in this paper. To be specific, JCSFE imposes the ℓ2,1-norm on the projection matrix to explore the label-common EEG features, while the ℓ1-norm is simultaneously used to explore the label-specific EEG features. Besides, a graph regularization term is introduced to enforce the local invariance property of the data, i.e., similar EEG samples are encouraged to have the same emotional state. Experimental results on the SEED-IV and SEED-V emotion data sets demonstrate that JCSFE not only achieves superior emotion recognition performance in comparison with state-of-the-art models but also provides a quantitative method to identify the label-common and label-specific EEG features in emotion recognition.
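The three regularizers named in the abstract can be illustrated with a minimal sketch. The matrix shapes, variable names, and toy values below are assumptions for illustration, not the authors' implementation: `W` stands for a projection matrix (features × labels), `F` for a label-indicator matrix, and `S` for a sample-similarity graph.

```python
import numpy as np

def l21_norm(W):
    # Sum of the l2-norms of the rows of W: pushes entire feature rows to
    # zero or non-zero together, selecting features common to all labels.
    return np.sum(np.linalg.norm(W, axis=1))

def l1_norm(W):
    # Sum of absolute values: element-wise sparsity, so each label (column)
    # can retain its own label-specific features.
    return np.sum(np.abs(W))

def graph_regularizer(F, S):
    # Local-invariance term: sum_ij S_ij ||f_i - f_j||^2 = 2 * tr(F^T L F),
    # where L = D - S is the graph Laplacian of the similarity matrix S.
    D = np.diag(S.sum(axis=1))
    L = D - S
    return np.trace(F.T @ L @ F)

# Toy projection matrix over 3 features and 3 labels (illustrative values).
W = np.array([[1.0, 2.0, 2.0],   # row l2-norm 3: feature kept for all labels
              [0.0, 0.0, 0.0],   # zero row: feature discarded for all labels
              [0.0, 3.0, 4.0]])  # row l2-norm 5: label-specific pattern

print(l21_norm(W))  # 3 + 0 + 5 = 8.0
print(l1_norm(W))   # 1 + 2 + 2 + 3 + 4 = 12.0

# Two connected samples with different label indicators are penalized.
S = np.array([[0.0, 1.0], [1.0, 0.0]])
F = np.eye(2)
print(graph_regularizer(F, S))  # tr(F^T L F) = 2.0
```

In a model of this kind, the overall objective would typically combine a data-fitting loss with weighted sums of these three terms; the weights trade off label-common sparsity, label-specific sparsity, and local invariance.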
