Abstract

EEG-based emotion classification is a vital component of human–machine interfaces. However, inter-subject variability poses a challenge for accurate domain-agnostic EEG emotion recognition, often requiring individual model calibration with a robust base model for fine-tuning. To overcome this limitation and develop a generalized model, we propose a Generalized Model based on Mutual Information for EEG Emotion Recognition without Adversarial Training (MI-EEG). The MI-EEG model leverages disentanglement to extract shared features, separating EEG features into domain-invariant class-relevant features and other features. To avoid adversarial training, mutual information minimization is applied during the decoupling process. Additionally, mutual information maximization is used to enrich the features by strengthening the relationship between domain-invariant class-relevant features and emotion labels. Furthermore, the transformer-based feature extractor, which utilizes a multi-headed attention mechanism and pooling operations, enhances feature quality in the time dimension. Experimental evaluation on two emotional EEG datasets demonstrates the superior performance of the proposed MI-EEG model compared to existing state-of-the-art methods.
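The abstract does not specify which mutual information estimator the model uses. As a rough, non-authoritative illustration of the underlying objective, the sketch below (assumed setup, not the paper's implementation) uses a simple histogram-based MI estimate to show why a class-relevant feature should have high MI with emotion labels while a class-irrelevant feature should not:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate between a 1-D continuous feature x
    and integer class labels y (a crude illustrative estimator)."""
    # Discretize the feature into `bins` bins.
    edges = np.histogram_bin_edges(x, bins=bins)
    x_binned = np.digitize(x, edges[1:-1])  # values in 0..bins-1
    # Build the joint distribution p(x, y).
    joint = np.zeros((bins, int(y.max()) + 1))
    for xi, yi in zip(x_binned, y):
        joint[xi, yi] += 1
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    # I(X; Y) = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=1000)          # three hypothetical emotion classes
informative = labels + 0.1 * rng.standard_normal(1000)  # class-relevant feature
noise = rng.standard_normal(1000)               # class-irrelevant feature
```

In this toy setting, `mutual_information(informative, labels)` is close to log 3 while `mutual_information(noise, labels)` is near zero; the MI-EEG objective described above pushes the disentangled feature split in exactly this direction (maximizing MI between class-relevant features and labels, minimizing MI between the two feature groups), though its actual neural MI bounds are not given in the abstract.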
