Abstract
The electroencephalogram (EEG) is widely used to identify emotional states. However, cross-subject emotion recognition based on EEG data is challenging due to physiological differences among participants. Many studies have addressed the cross-subject problem with multi-source domain adaptation methods and achieved promising results. Multi-source domain adaptation constructs an independent branch for each subject's EEG data, performs one-to-one domain adaptation to extract domain-specific features, and then infers through the multiple branches. A practical obstacle, however, is that the computational cost grows with the number of source domains; moreover, too many source domains may aggravate domain shift. To address these challenges, we propose a novel emotion recognition method based on Multi-Source Domain Branch Self-Selected Joint Domain Adaptation (MSS-JDA). First, we construct a feature extractor shared across all source domain branches to extract common low-level features. Next, we apply one-to-one joint domain adaptation to extract features specific to each domain. During training, the multi-source domain branch self-selection mechanism, guided by prior knowledge in the early training phase, prunes the source domain branches that deviate most from the target domain, retaining only a fixed number of branches close to the target domain for continued training. Finally, the domain-specific features are fed into their respective classifiers to obtain N predictions, whose average is taken as the final result. Extensive experiments show that MSS-JDA achieves an accuracy of 93.78% (standard deviation 3.39) on SEED and 78.93% (standard deviation 7.39) on SEED-IV, surpassing most existing models. These results indicate that the proposed MSS-JDA method improves the accuracy of EEG emotion recognition, benefiting practical applications of EEG classification.
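To make the pipeline described above concrete, the following is a minimal sketch of its structure: a shared low-level feature extractor, one domain-specific branch and classifier per source subject, early pruning of branches far from the target domain, and averaging over the retained branches' predictions. Everything here is an illustrative assumption rather than the authors' implementation: the module shapes, the 310-dimensional input (the differential-entropy feature size commonly used with SEED), and the simple mean-feature distance used as a stand-in for the paper's prior-knowledge-guided selection criterion.

```python
import torch
import torch.nn as nn

class MSSJDASketch(nn.Module):
    """Hypothetical sketch of the MSS-JDA structure from the abstract."""

    def __init__(self, n_sources, in_dim=310, hid_dim=64, n_classes=3):
        super().__init__()
        # Shared extractor for common low-level EEG features.
        self.shared = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # One domain-specific branch and classifier per source domain.
        self.branches = nn.ModuleList(
            nn.Sequential(nn.Linear(hid_dim, hid_dim), nn.ReLU())
            for _ in range(n_sources)
        )
        self.classifiers = nn.ModuleList(
            nn.Linear(hid_dim, n_classes) for _ in range(n_sources)
        )
        # Indices of retained source branches; all branches active at first.
        self.active = list(range(n_sources))

    def prune(self, src_batches, tgt_batch, keep):
        # Branch self-selection: keep the `keep` source branches whose mean
        # shared features lie closest to the target mean (a simple stand-in
        # for the paper's prior-knowledge-guided criterion).
        with torch.no_grad():
            t_mu = self.shared(tgt_batch).mean(0)
            dists = [
                (self.shared(x).mean(0) - t_mu).norm().item()
                for x in src_batches
            ]
        self.active = sorted(range(len(dists)), key=lambda i: dists[i])[:keep]

    def forward(self, x):
        h = self.shared(x)
        # Average the predictions of the retained branch classifiers.
        logits = [self.classifiers[i](self.branches[i](h)) for i in self.active]
        return torch.stack(logits).mean(0)

# Usage: 14 source subjects, prune to the 5 closest, then classify.
model = MSSJDASketch(n_sources=14)
sources = [torch.randn(32, 310) for _ in range(14)]  # per-subject batches
target = torch.randn(32, 310)                        # target-subject batch
model.prune(sources, target, keep=5)
pred = model(target).argmax(1)
```

After pruning, only the retained branches participate in further training and inference, which is how the method caps the computational cost as the number of source subjects grows.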