Abstract

Objective. Extracting discriminative spatial information from multiple electrodes is a crucial and challenging problem for electroencephalogram (EEG)-based emotion recognition. In addition, the domain shift caused by individual differences degrades the performance of cross-subject EEG classification. Approach. To address these problems, we propose the cerebral asymmetry representation learning-based deep subdomain adaptation network (CARL-DSAN) to enhance cross-subject EEG-based emotion recognition. The CARL module is inspired by neuroscience findings that the left and right brain hemispheres activate asymmetrically during cognitive and affective processes. In the CARL module, we introduce a two-step strategy that extracts discriminative features through intra-hemisphere spatial learning followed by asymmetry representation learning. Transformer encoders within the CARL module emphasize the most contributive electrodes and electrode pairs. The DSAN module, which outperforms global domain adaptation, is then adopted to mitigate domain shift and further improve cross-subject performance by aligning relevant subdomains that share the same class samples. Main Results. To validate the effectiveness of CARL-DSAN, we conduct subject-independent experiments on the DEAP database, achieving accuracies of 68.67% and 67.11% for arousal and valence classification, respectively, and corresponding accuracies of 67.70% and 67.18% on the MAHNOB-HCI database. Significance. The results demonstrate that CARL-DSAN achieves outstanding cross-subject performance in both arousal and valence classification.
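To make the two-step CARL idea concrete, the sketch below shows one plausible reading of it under stated assumptions: per-electrode features from each hemisphere are first refined by a transformer encoder (intra-hemisphere spatial learning), then features of symmetric electrode pairs are differenced and refined by a second encoder (asymmetry representation learning) before classification. The class name `CARLSketch`, the pairing order, feature dimensions, and layer counts are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-step CARL strategy (assumptions noted above).
import torch
import torch.nn as nn

class CARLSketch(nn.Module):
    def __init__(self, n_pairs=14, feat_dim=64, n_heads=4):
        super().__init__()
        enc_layer = lambda: nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True)
        # Step 1: attend over electrodes within each hemisphere separately,
        # letting the encoder emphasize contributive electrodes.
        self.left_enc = nn.TransformerEncoder(enc_layer(), num_layers=1)
        self.right_enc = nn.TransformerEncoder(enc_layer(), num_layers=1)
        # Step 2: attend over symmetric electrode pairs (asymmetry tokens),
        # letting the encoder emphasize contributive electrode pairs.
        self.pair_enc = nn.TransformerEncoder(enc_layer(), num_layers=1)
        self.classifier = nn.Linear(n_pairs * feat_dim, 2)  # e.g. low/high arousal

    def forward(self, left, right):
        # left, right: (batch, n_pairs, feat_dim); electrode index i in `left`
        # is assumed to mirror index i in `right` across the midline.
        l = self.left_enc(left)
        r = self.right_enc(right)
        asym = self.pair_enc(l - r)          # differential asymmetry features
        return self.classifier(asym.flatten(1))

# Usage:
#   x_l, x_r = torch.randn(8, 14, 64), torch.randn(8, 14, 64)
#   logits = CARLSketch()(x_l, x_r)          # shape (8, 2)
```

In a full pipeline, the features produced before the classifier would also feed the DSAN module, which aligns class-conditional subdomains between source and target subjects (e.g. with a local MMD-style loss) rather than aligning the whole feature distributions globally.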
