Abstract

Electroencephalogram (EEG) signals, which objectively reflect the state of the brain, are widely favored in emotion recognition research. However, cross-session and cross-subject variation in EEG signals has hindered the practical deployment of EEG-based emotion recognition technologies. To address this issue, we propose a multi-source domain transfer method based on subdomain adaptation and minimum class confusion (MS-SAMCC). First, we introduce the mixup data augmentation technique to generate augmented samples. Next, we propose a minimum class confusion subdomain adaptation method (MCCSA) as a sub-module of the multi-source domain adaptation module. This approach enables global alignment between each source domain and the target domain, while also aligning the individual subdomains within them. Additionally, we employ minimum class confusion (MCC) as a regularizer for this sub-module. We performed experiments on the SEED, SEED IV, and FACED datasets. In the cross-subject experiments, our method achieved mean classification accuracies of 87.14% on SEED, 63.24% on SEED IV, and 42.07% on FACED. In the cross-session experiments, it obtained average classification accuracies of 94.20% on SEED and 71.66% on SEED IV. These results demonstrate that the proposed MS-SAMCC approach can effectively address EEG-based emotion recognition tasks.
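The abstract names two generic building blocks, mixup data augmentation and the minimum class confusion (MCC) regularizer, both of which have standard published formulations. The sketch below illustrates those two ingredients only; it is not the authors' MS-SAMCC implementation, and the function names, the Beta parameter `alpha=0.2`, and the temperature `T=2.0` are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Standard mixup (Zhang et al.): draw lam ~ Beta(alpha, alpha) and
    # convexly combine two samples and their one-hot labels.
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def mcc_loss(logits, T=2.0):
    # Minimum class confusion (Jin et al.) on a batch of target-domain
    # classifier logits, shape (batch, num_classes).
    z = logits / T                       # temperature-scaled logits
    z = z - z.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(z)
    p /= p.sum(axis=1, keepdims=True)    # per-sample class probabilities
    # Entropy-based weights: more-certain samples contribute more.
    H = -(p * np.log(p + 1e-12)).sum(axis=1)
    w = 1.0 + np.exp(-H)
    w = len(w) * w / w.sum()
    # Weighted class-confusion matrix, then category normalization.
    C = (w[:, None] * p).T @ p
    C = C / C.sum(axis=1, keepdims=True)
    # Loss = mean off-diagonal (between-class) confusion; minimizing it
    # pushes predictions toward confident, unambiguous classes.
    return (C.sum() - np.trace(C)) / C.shape[0]
```

On perfectly uniform predictions the confusion matrix is maximally off-diagonal and the loss is high; on confident, well-separated predictions it approaches zero, which is why MCC works as an unsupervised regularizer on unlabeled target data.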
