Abstract
Due to its non-invasiveness and high precision, electroencephalography (EEG) is often combined with artificial intelligence (AI) for emotion recognition. However, the internal variability of EEG data is an obstacle to classification accuracy. Domain adaptation, which exploits labeled data of a similar nature but from different domains, offers an attractive solution to this problem. Most existing research aggregates the EEG data from different subjects and sessions into a single source domain, ignoring the fact that each source has its own marginal distribution. Moreover, existing methods often align only the representation distributions extracted from a single network structure, which may capture only partial information. We therefore propose multi-source and multi-representation adaptation (MSMRA) for cross-domain EEG emotion recognition, which divides the EEG data from different subjects and sessions into multiple domains and aligns the distributions of multiple representations extracted from a hybrid structure. Two datasets, SEED and SEED-IV, are used to validate the proposed method in cross-session and cross-subject transfer scenarios. Experimental results demonstrate that our model outperforms state-of-the-art models in most settings.
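The core idea of treating each subject or session as its own source domain and aligning each one to the target separately, rather than pooling all sources first, can be sketched with a standard maximum mean discrepancy (MMD) criterion. The sketch below is illustrative only: the feature shapes, Gaussian kernel bandwidth, and random stand-in data are assumptions, not the paper's actual architecture or features.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise RBF kernel matrix between the rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy
    # between the samples x and y.
    return (gaussian_kernel(x, x, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(0)
# Hypothetical per-subject feature matrices (samples x features),
# standing in for representations extracted from EEG trials.
sources = [rng.normal(loc=m, size=(50, 8)) for m in (0.0, 0.5, 1.0)]
target = rng.normal(loc=0.2, size=(50, 8))

# One alignment term per source domain, averaged -- rather than
# pooling all sources into a single domain and aligning once.
loss = np.mean([mmd2(s, target) for s in sources])
```

In a full model this per-source loss would be added to the classification loss and minimized jointly; with multiple representations, one such term would be computed per representation and per source.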
Highlights
Studies have shown that EEG plays an important role in research on human emotion and that regional brain activity is closely related to certain emotional states (Niemic, 2004).
In the cross-subject scenario, our method is 2% lower than Multi-Source Marginal Distribution Adaptation (MS-MDA) but better than all other methods, which shows that it remains competitive.
On the SEED-IV dataset, our proposed method clearly exceeds all other competitors, improving on them by at least 11% and 10% in the cross-session and cross-subject scenarios, respectively.
Summary
Emotion is a physiological state of humans that arises when people are stimulated by external or internal factors. Many scholars have studied emotion recognition using non-physiological signals such as gestures, facial expressions, eye movements, and voice. Some (Camurri et al., 2003; Durupinar et al., 2016; Senecal et al., 2016) try to identify emotions from dance movements, but these methods are limited to specific dance moves and lack practical significance. Compared with non-physiological signals, physiological signals [such as blood volume pulse (BVP), electroencephalogram (EEG), electrooculogram (EOG), electrocardiography (ECG), and electromyogram (EMG)] are generated spontaneously by the human body and can truly reflect the emotional state of humans, giving them high reliability. EEG, which is non-subjective, available in real time, and rich in information, has been widely used in the field of emotion recognition. Studies have shown that EEG plays an important role in research on human emotion and that regional brain activity is closely related to certain emotional states (Niemic, 2004).