Abstract

Objective. Because of individual differences in electroencephalogram (EEG) signals, a model trained with the subject-dependent technique on one person's data can be inaccurate when applied to another person for emotion recognition; the subject-dependent approach may therefore generalize poorly compared with the subject-independent approach. Existing subject-independent studies, however, have neither fully exploited the topology of EEG channels nor resolved the distribution shift between the source and target domains. Approach. To eliminate individual differences in EEG signals, this paper proposes the domain adversarial graph attention model, a novel EEG-based emotion recognition model. The basic idea is to build a graph from the biological topology of the EEG channels to model multichannel EEG signals; graph theory can topologically describe and analyze the relationships and mutual dependencies among channels. Then, unlike conventional graph convolutional networks, self-attention pooling is used to extract salient EEG features from the graph, which effectively improves performance. Finally, after graph pooling, a graph-based domain adversarial model is applied to identify and handle EEG variation across subjects, achieving good generalizability efficiently. Main Results. We conduct extensive evaluations on two benchmark data sets (SEED and SEED IV) and obtain state-of-the-art results in subject-independent emotion recognition. Our model raises accuracy on SEED to 92.59% (a 4.06% improvement) with the lowest standard deviation (STD) of 3.21% (a 2.46% reduction), and accuracy on SEED IV to 80.74% (a 6.90% improvement) with the lowest STD of 4.14% (a 3.88% reduction). Its computational complexity is also drastically lower than that of comparable methods (33 times lower). Significance. We have developed a model that significantly reduces computation time while maintaining accuracy, making EEG-based emotion decoding more practical and generalizable.
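The abstract describes a three-stage pipeline: graph attention over the EEG channel topology, self-attention graph pooling to keep the most salient channels, and a domain adversarial branch trained through gradient reversal. The sketch below is a minimal PyTorch rendering of that pipeline, not the authors' released implementation; the single attention head, layer widths, top-k pooling size, placeholder adjacency, and domain-label count are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated (scaled) gradient backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention over EEG channels with a dense adjacency mask."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        # x: (batch, channels, in_dim); adj: (channels, channels), should include self-loops
        h = self.W(x)                                    # (B, C, F)
        B, C, Fdim = h.shape
        hi = h.unsqueeze(2).expand(B, C, C, Fdim)        # node i repeated along dim 2
        hj = h.unsqueeze(1).expand(B, C, C, Fdim)        # node j repeated along dim 1
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        e = e.masked_fill(adj == 0, float('-inf'))       # attend only to topological neighbors
        alpha = torch.softmax(e, dim=-1)                 # attention weights per node
        return F.elu(alpha @ h)                          # (B, C, F)

class SelfAttentionPool(nn.Module):
    """Self-attention pooling: keep the k most salient channels, scored by a learnable projection."""
    def __init__(self, dim, k):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.k = k

    def forward(self, h):
        s = torch.tanh(self.score(h)).squeeze(-1)        # (B, C) saliency scores
        topk = s.topk(self.k, dim=-1).indices            # indices of the salient channels
        gate = s.gather(-1, topk).unsqueeze(-1)          # gate kept nodes by their scores
        h_sel = h.gather(1, topk.unsqueeze(-1).expand(-1, -1, h.size(-1)))
        return (h_sel * gate).mean(dim=1)                # (B, F) graph-level readout

class DomainAdversarialGAT(nn.Module):
    """Shared graph encoder with an emotion head and an adversarial domain head."""
    def __init__(self, in_dim, hid, n_classes, n_domains, k=16):
        super().__init__()
        self.gat = GraphAttentionLayer(in_dim, hid)
        self.pool = SelfAttentionPool(hid, k)
        self.emotion_head = nn.Linear(hid, n_classes)
        self.domain_head = nn.Linear(hid, n_domains)

    def forward(self, x, adj, lambd=1.0):
        z = self.pool(self.gat(x, adj))
        return self.emotion_head(z), self.domain_head(GradReverse.apply(z, lambd))

# Illustrative usage on SEED-like input: 62 channels, 5 features per channel
# (e.g. differential entropy over 5 bands). The adjacency here is a placeholder;
# the real model would use the biological channel topology. n_domains depends on
# the training protocol (e.g. source subjects in leave-one-subject-out).
model = DomainAdversarialGAT(in_dim=5, hid=32, n_classes=3, n_domains=14, k=16)
x = torch.randn(8, 62, 5)                    # (batch, channels, features)
adj = torch.eye(62)                          # placeholder topology graph
emo_logits, dom_logits = model(x, adj, lambd=0.5)
```

Under this reading, training would minimize the emotion classification loss plus a domain classification loss computed through the gradient reversal layer, so the shared encoder learns features that predict emotion while remaining indistinguishable across subjects.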
