ACCNet: Adaptive cross-frequency coupling graph attention for EEG emotion recognition.

Similar Papers
  • Research Article
  • Citations: 1
  • 10.1038/s41598-025-98623-7
SS-EMERGE - self-supervised enhancement for multidimension emotion recognition using GNNs for EEG
  • Apr 24, 2025
  • Scientific Reports
  • Chirag Ahuja + 1 more

Self-supervised learning (SSL) is a potent method for leveraging unlabelled data. Nonetheless, owing to the low signal-to-noise ratio and high-frequency content of EEG signals, SSL often does not surpass fully supervised techniques in cross-subject tasks such as emotion recognition. Therefore, this study introduces a hybrid SSL framework: Self-Supervised Enhancement for Multidimension Emotion Recognition using Graph Neural Networks (SS-EMERGE). This model enhances cross-subject EEG-based emotion recognition by incorporating causal convolutions for temporal feature extraction, Graph Attention Transformers (GAT) for spatial modelling, and spectral embedding for spectral-domain analysis. The approach utilises meiosis-based contrastive learning for pretraining, followed by fine-tuning with minimal labelled data, thereby enriching dataset diversity and specificity. Evaluations on the widely used emotion recognition datasets SEED and SEED-IV reveal that SS-EMERGE achieves impressive Leave-One-Subject-Out (LOSO) accuracies of 92.35% and 81.51%, respectively. The study also proposes a foundation model pre-trained on the combined SEED and SEED-IV datasets, demonstrating performance comparable to the individual models. These results emphasise the potential of SS-EMERGE in advancing EEG-based emotion recognition with high accuracy and minimal labelled data.
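The meiosis-based contrastive pretraining in SS-EMERGE is not detailed in this abstract; as a rough illustration of the general idea, a minimal InfoNCE-style contrastive objective (a standard SSL loss, not the paper's exact formulation) can be sketched in NumPy:

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss between two augmented views of the
    same batch of trials; matching rows are the positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return -np.log(np.diag(p)).mean()             # positives sit on the diagonal

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce(z, z)          # views agree -> low loss
shuffled = info_nce(z, z[::-1])   # positives mismatched -> higher loss
```

Pretraining minimizes this loss over unlabelled trials; the encoder is then fine-tuned with a small labelled set.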

  • Research Article
  • Citations: 505
  • 10.1109/taffc.2020.2994159
EEG-Based Emotion Recognition Using Regularized Graph Neural Networks
  • May 5, 2020
  • IEEE Transactions on Affective Computing
  • Peixiang Zhong + 2 more

Electroencephalography (EEG) measures the neuronal activities in different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not fully exploit the topology of EEG channels. In this article, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition. RGNN considers the biological topology among different brain regions to capture both local and global relations among different EEG channels. Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in a graph neural network where the connection and sparseness of the adjacency matrix are inspired by neuroscience theories of human brain organization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to better handle cross-subject EEG variations and noisy labels, respectively. Extensive experiments on two public datasets, SEED, and SEED-IV, demonstrate the superior performance of our model than state-of-the-art models in most experimental settings. Moreover, ablation studies show that the proposed adjacency matrix and two regularizers contribute consistent and significant gain to the performance of our RGNN model. Finally, investigations on the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.
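The sparse, neuroscience-inspired adjacency described above can be illustrated with a toy initialization; the 2D coordinates and the min(1, δ/d²) weighting below are assumptions for illustration, not RGNN's exact construction:

```python
import numpy as np

# Hypothetical 2D electrode coordinates; a real montage would use the
# 3D positions of all 62 SEED channels.
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])

def local_adjacency(coords, delta=2.0):
    """Distance-based connection weights capped at 1, so nearby channels
    connect strongly and distant ones only weakly (sparse local topology)."""
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    adj = np.minimum(delta / np.where(d2 > 0, d2, np.inf), 1.0)
    np.fill_diagonal(adj, 1.0)   # self-connections
    return adj

A = local_adjacency(coords)
```

The resulting symmetric matrix serves as the learnable adjacency's starting point; regularizers such as NodeDAT and EmotionDL act on the network built on top of it.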

  • Research Article
  • Citations: 14
  • 10.3390/electronics11040651
Cross-Day EEG-Based Emotion Recognition Using Transfer Component Analysis
  • Feb 19, 2022
  • Electronics
  • Zhongyang He + 4 more

EEG-based emotion recognition can help achieve more natural human-computer interaction, but the temporal non-stationarity of EEG signals affects the robustness of EEG-based emotion recognition models. Most existing studies train and test models on emotional EEG data collected within the same session; once such a model is applied to data collected from the same subject at a different time, its recognition accuracy decreases significantly. To address cross-day EEG-based emotion recognition, this paper constructs a database of emotional EEG signals collected over six days for each subject, using stimulus materials from the Chinese Affective Video System and a self-built video library; to date, this database covers the largest number of recording days for a single subject. To study the cross-day neural patterns of emotion in EEG signals, brain topography is analyzed, showing that a stable cross-day neural pattern of emotion exists. Then, the Transfer Component Analysis (TCA) algorithm is used to adaptively determine the optimal dimensionality of the TCA transformation and to match domains of the best-correlated emotion features across multiple time domains using EEG signals from different days. The experimental results show that the TCA-based domain adaptation strategy effectively improves the accuracy of cross-day emotion recognition by 3.55% and 2.34% in the joy-sadness and joy-anger classifications, respectively. The emotion recognition model and brain topography verify that the database provides a reliable data basis for emotion recognition across different time domains. This EEG database will be opened to more researchers to promote the practical application of emotion recognition.
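A minimal linear-kernel TCA sketch (following the generic Pan et al. formulation, not necessarily this paper's exact variant) shows how source-day and target-day features are projected into a shared space:

```python
import numpy as np

def tca(Xs, Xt, dim=2, mu=1.0):
    """Linear-kernel TCA: find components that shrink the MMD between
    source-day and target-day features while preserving data variance."""
    X = np.vstack([Xs, Xt])
    ns, nt = len(Xs), len(Xt)
    n = ns + nt
    K = X @ X.T                                    # linear kernel matrix
    e = np.r_[np.full(ns, 1 / ns), np.full(nt, -1 / nt)]
    L = np.outer(e, e)                             # MMD coefficient matrix
    H = np.eye(n) - np.ones((n, n)) / n            # centering matrix
    # Leading eigenvectors of (K L K + mu I)^-1 K H K give the components
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = vecs[:, np.argsort(-vals.real)[:dim]].real
    Z = K @ W                                      # transformed samples
    return Z[:ns], Z[ns:]

rng = np.random.default_rng(1)
Zs, Zt = tca(rng.normal(size=(10, 5)), rng.normal(size=(12, 5)) + 2.0)
```

A classifier trained on `Zs` can then be applied to `Zt`; the paper additionally tunes `dim` adaptively per day pair.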

  • Research Article
  • Citations: 54
  • 10.3390/app10051619
EEG-Based Emotion Recognition Using Logistic Regression with Gaussian Kernel and Laplacian Prior and Investigation of Critical Frequency Bands
  • Feb 29, 2020
  • Applied Sciences
  • Chao Pan + 4 more

Emotion plays a central role in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has advanced considerably owing to the application of Brain-Computer Interfaces (BCI) and its effectiveness compared with body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition remains an underexplored problem. This paper introduces Logistic Regression (LR) with a Gaussian kernel and Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the separability of EEG data in the transformed space. The Laplacian prior promotes the sparsity of the learned LR regressors to avoid over-specification. The LR regressors are optimized using the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm; for simplicity, the introduced method is denoted LORSAL. Experiments were conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP). Various spectral features and electrode-combination features (power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU)) were extracted from different frequency bands (Delta, Theta, Alpha, Beta, Gamma, and Total) of the EEG signals. Naive Bayes (NB), support vector machine (SVM), linear LR with L1-regularization (LR_L1), and linear LR with L2-regularization (LR_L2) were used for comparison in binary emotion classification for valence and arousal. LORSAL obtained the best classification accuracies (77.17% and 77.03% for valence and arousal, respectively) on the DE features extracted from the total frequency band. This paper also investigates the critical frequency bands in emotion recognition. The experimental results showed the superiority of the Gamma and Beta bands in classifying emotions. DE was shown to be the most informative feature, while DASM and DCAU offered lower computational complexity with comparably good accuracies. An analysis of LORSAL and recent deep learning (DL) methods is included in the discussion. Conclusions and future work are presented in the final section.
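The DE and DASM features named above have simple closed forms under the usual Gaussian assumption on band-filtered EEG; a minimal sketch:

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-filtered EEG segment under a Gaussian assumption:
    0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(x))

def dasm(left, right):
    """Differential asymmetry: DE(left electrode) - DE(right electrode)
    for a symmetric electrode pair."""
    return differential_entropy(left) - differential_entropy(right)

de_unit = differential_entropy(np.array([-1.0, 1.0]))  # variance 1 segment
```

RASM is the analogous ratio and DCAU the frontal-posterior difference; all reduce to arithmetic on per-channel DE values, which is why they are cheap to compute.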

  • Research Article
  • Citations: 13
  • 10.7717/peerj-cs.2065
A comprehensive review of deep learning in EEG-based emotion recognition: classifications, trends, and practical implications
  • May 23, 2024
  • PeerJ Computer Science
  • Weizhi Ma + 5 more

Emotion recognition utilizing EEG signals has emerged as a pivotal component of human-computer interaction. In recent years, with the relentless advancement of deep learning techniques, using deep learning to analyze EEG signals has assumed a prominent role in emotion recognition. Applying deep learning in the context of EEG-based emotion recognition carries profound practical implications. Although many modelling approaches and some review articles have scrutinized this domain, the field has yet to undergo a comprehensive and precise classification and summarization. Existing classifications are somewhat coarse, with insufficient attention given to potential applications within the domain. Therefore, this article systematically classifies recent developments in EEG-based emotion recognition, providing researchers with a lucid understanding of the field's various trajectories and methodologies. Additionally, it elucidates why distinct directions necessitate distinct modeling approaches. In conclusion, this article synthesizes and dissects the practical significance of EEG signals in emotion recognition, emphasizing promising avenues for future application.

  • Research Article
  • Citations: 3
  • 10.3389/fphys.2023.1200656
CDBA: a novel multi-branch feature fusion model for EEG-based emotion recognition.
  • Jul 20, 2023
  • Frontiers in Physiology
  • Zhentao Huang + 8 more

EEG-based emotion recognition through artificial intelligence is a major area of biomedical and machine learning research, playing a key role in understanding brain activity and developing decision-making systems. However, traditional EEG-based emotion recognition uses a single-feature input mode, which cannot capture multiple kinds of feature information and cannot meet the intelligence and real-time requirements of brain-computer interfaces. Moreover, because the EEG signal is nonlinear, traditional time-domain or frequency-domain methods alone are not well suited. In this paper, a CNN-DSC-Bi-LSTM-Attention (CDBA) model based on EEG signals for automatic emotion recognition is presented, which contains three feature-extraction branches. The normalized EEG signals are used as input; features are extracted by the multiple branches and then concatenated, and each channel's feature weight is assigned through the attention mechanism layer. Finally, Softmax is used to classify the EEG signals. To evaluate the performance of the proposed CDBA model, experiments were performed on the SEED and DREAMER datasets separately. The validation results show that the proposed CDBA model is effective in classifying EEG emotions. For the three-category (positive, neutral, and negative) and four-category (happiness, sadness, fear, and neutrality) tasks, the classification accuracies were 99.44% and 99.99%, respectively, on the SEED dataset. For five-category classification (Valence 1-Valence 5) on the DREAMER dataset, the accuracy was 84.49%. To further verify the model's accuracy and credibility, multi-class experiments based on ten-fold cross-validation were conducted; all evaluation indexes were higher than those of other models. The results show that the attention-based multi-branch feature fusion deep learning model has strong fitting and generalization ability and can handle nonlinear modeling problems, making it an effective emotion recognition method. It may therefore aid the diagnosis and treatment of nervous system diseases and is expected to be applied in emotion-based brain-computer interface systems.

  • Research Article
  • 10.1109/embc53108.2024.10782334
EEG-based Emotion Recognition using Graph Attention Network with Dual-Branch Attention Module.
  • Jul 15, 2024
  • Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference
  • Cheng Li + 3 more

EEG reveals emotion-related human brain activity and has become an important modality in affective computing. In this study, we developed a novel approach, DAM-GAT, which incorporates a dual-branch attention module (DAM) into a graph attention network (GAT) for EEG-based emotion recognition. The method uses the GAT to capture local features of emotional EEG signals. To enhance the EEG features most relevant to emotion recognition, it also includes a DAM that computes weights from both channel and frequency information. Additionally, the relationships between EEG channels are determined using the phase-locking value (PLV) connectivity of the corresponding EEG signals. On the SEED dataset, the proposed approach achieved an accuracy of up to 94.63% for emotion recognition, demonstrating impressive performance compared with other existing methods.
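The PLV connectivity used above to define channel relationships can be sketched with a NumPy-only analytic signal; the fixed 8 Hz and 13 Hz test tones are illustrative, not from the paper:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (numpy-only stand-in for scipy.signal.hilbert)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(np.fft.fft(x) * h)

def plv(x, y):
    """Phase-locking value: mean resultant length of the instantaneous
    phase difference (1 = perfectly locked, ~0 = unrelated phases)."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.exp(1j * dphi).mean()))

t = np.arange(256) / 256.0
locked = plv(np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 8 * t + 0.7))    # same frequency
drifting = plv(np.sin(2 * np.pi * 8 * t), np.sin(2 * np.pi * 13 * t))       # phases drift apart
```

Computing PLV for every channel pair in a band yields the adjacency matrix on which the GAT operates.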

  • Research Article
  • 10.3389/fnhum.2025.1445763
Cross-subject affective analysis based on dynamic brain functional networks.
  • Apr 14, 2025
  • Frontiers in human neuroscience
  • Lifeng You + 4 more

Emotion recognition is crucial in facilitating human-computer emotional interaction. To enhance the credibility and realism of emotion recognition, researchers have turned to physiological signals, particularly EEG signals, as they directly reflect cerebral cortex activity. However, due to inter-subject variability and non-smoothness of EEG signals, the generalization performance of models across subjects remains a challenge. In this study, we proposed a novel approach that combines time-frequency analysis and brain functional networks to construct dynamic brain functional networks using sliding time windows. This integration of time, frequency, and spatial domains helps to effectively capture features, reducing inter-individual differences, and improving model generalization performance. To construct brain functional networks, we employed mutual information to quantify the correlation between EEG channels and set appropriate thresholds. We then extracted three network attribute features-global efficiency, local efficiency, and local clustering coefficients-to achieve emotion classification based on dynamic brain network features. The proposed method is evaluated on the DEAP dataset through subject-dependent (trial-independent), subject-independent, and subject- and trial-independent experiments along both valence and arousal dimensions. The results demonstrate that our dynamic brain functional network outperforms the static brain functional network in all three experimental cases. High classification accuracies of 90.89% and 91.17% in the valence and arousal dimensions, respectively, were achieved on the subject-independent experiments based on the dynamic brain function, leading to significant advancements in EEG-based emotion recognition. 
In addition, experiments on individual brain regions showed that the left and right temporal lobes focus on processing subject-specific emotional information, whereas the remaining brain regions attend to basic emotional information.
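Of the three network attributes extracted above, global efficiency is the simplest to illustrate; a minimal sketch on toy binary graphs (the mutual-information estimation and thresholding steps are omitted):

```python
from collections import deque

def global_efficiency(adj):
    """Mean inverse shortest-path length over ordered node pairs of an
    unweighted graph given as a binary adjacency matrix."""
    n = len(adj)
    total = 0.0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:                      # breadth-first search from node s
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for d in dist if d > 0)
    return total / (n * (n - 1))

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]   # fully connected: efficiency 1
chain = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]      # path graph: lower efficiency
```

In the paper this is computed per sliding window, so the feature traces how network integration changes over time.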

  • Research Article
  • Citations: 12
  • 10.1186/s13634-024-01146-y
EEG emotion recognition based on differential entropy feature matrix through 2D-CNN-LSTM network
  • Apr 8, 2024
  • EURASIP Journal on Advances in Signal Processing
  • Teng Wang + 4 more

Emotion recognition research has attracted great interest in various research fields, and electroencephalography (EEG) is considered a promising tool for extracting emotion-related information. However, traditional EEG-based emotion recognition methods ignore the spatial correlation between electrodes. To address this problem, this paper proposes an EEG-based emotion recognition method combining a differential entropy feature matrix (DEFM) with a 2D-CNN-LSTM network. First, the one-dimensional EEG vector sequence is converted into a two-dimensional grid matrix sequence that corresponds to the spatial distribution of the EEG electrode positions over brain regions, better characterizing the spatial correlation between the signals of multiple adjacent electrodes. The EEG signal is then divided into equal time windows, and the differential entropy (DE) of each electrode within each window is calculated; combining the two-dimensional grid matrix with the DE values yields a new data representation that captures the spatiotemporal correlation of the EEG signal, called the DEFM. Next, the 2D-CNN-LSTM identifies the emotional categories contained in the EEG signals, which are finally classified through a fully connected layer. Experiments are conducted on the widely used DEAP dataset. The results show that the method achieves average classification accuracies of 91.92% and 92.31% for valence and arousal, respectively. The method effectively combines the temporal and spatial correlation of EEG signals, improves the accuracy and robustness of EEG emotion recognition, and has broad application prospects in EEG-based emotion classification and recognition.
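The 1D-to-2D grid conversion behind the DEFM can be sketched as follows; the 3x3 layout and channel names are illustrative stand-ins for the paper's full montage:

```python
import numpy as np

# Hypothetical 3x3 scalp layout for nine channels; the paper maps the full
# electrode montage onto a larger grid following 10-20 system positions.
LAYOUT = {"F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
          "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
          "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2)}

def to_grid(de_values, channels, shape=(3, 3)):
    """Scatter per-channel DE features into a 2D matrix mirroring the
    spatial arrangement of the electrodes; empty cells stay zero."""
    grid = np.zeros(shape)
    for ch, v in zip(channels, de_values):
        grid[LAYOUT[ch]] = v
    return grid

frame = to_grid([0.4, 1.2], ["Cz", "P4"])   # one DEFM frame for one window
```

Stacking one such frame per time window produces the matrix sequence that the 2D-CNN-LSTM consumes.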

  • Research Article
  • Citations: 4
  • 10.1063/5.0098454
Cross-subject emotion recognition using visibility graph and genetic algorithm-based convolution neural network.
  • Sep 1, 2022
  • Chaos: An Interdisciplinary Journal of Nonlinear Science
  • Qing Cai + 4 more

An efficient emotion recognition model is an important research branch in electroencephalogram (EEG)-based brain-computer interfaces. However, the input of the emotion recognition model is often a whole set of EEG channels obtained by electrodes placed on subjects. The unnecessary information produced by redundant channels affects the recognition rate and depletes computing resources, thereby hindering the practical applications of emotion recognition. In this work, we aim to optimize the input of EEG channels using a visibility graph (VG) and genetic algorithm-based convolutional neural network (GA-CNN). First, we design an experiment to evoke three types of emotion states using movies and collect the multi-channel EEG signals of each subject under different emotion states. Then, we construct VGs for each EEG channel and derive nonlinear features representing each EEG channel. We employ the genetic algorithm (GA) to find the optimal subset of EEG channels for emotion recognition and use the recognition results of the CNN as fitness values. The experimental results show that the recognition performance of the proposed method using a subset of EEG channels is superior to that of the CNN using all channels for each subject. Last, based on the subset of EEG channels searched by the GA-CNN, we perform cross-subject emotion recognition tasks employing leave-one-subject-out cross-validation. These results demonstrate the effectiveness of the proposed method in recognizing emotion states using fewer EEG channels and further enrich the methods of EEG classification using nonlinear features.
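The visibility graph construction at the core of this method is compact enough to sketch directly (the standard natural-visibility criterion of Lacasa et al., assumed here to match the paper's VG variant):

```python
def visibility_edges(x):
    """Natural visibility graph of a time series: samples i and j are
    connected iff every intermediate sample lies strictly below the
    straight line joining (i, x[i]) and (j, x[j])."""
    n = len(x)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if all(x[k] < x[j] + (x[i] - x[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

Nonlinear features such as degree statistics are then derived from the edge set of each channel's graph, before the GA selects the channel subset.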

  • Research Article
  • 10.3329/jsr.v17i3.78907
Robust EEG-Based Emotion Recognition using CNN: A High-Accuracy Approach with Differential Entropy Features and Spatial-Frequency Domain Analysis on the SEED Dataset
  • Sep 1, 2025
  • Journal of Scientific Research
  • A Kotwal + 3 more

The area of human emotion recognition using EEG signals is evolving rapidly and has become an important research topic for affective computing in neuroscience. Neuro-computing has also shown potential applications in mental health monitoring, brain-computer interfaces, and adaptive learning systems. Deep learning models have shown significant progress in producing effective results when applied to the analysis of EEG signals. In this study, the efficiency of Convolutional Neural Network (CNN) models for emotion categorization is investigated on the EEG-based SEED dataset. Differential Entropy (DE) features derived from five important EEG rhythms (delta, theta, alpha, beta, and gamma) are used as inputs to CNN classifiers. To enhance performance, the model uses a two-dimensional (2D) tensor representation of the input, which allows the network to learn and exploit spatial correlations between different EEG channels. Experimental results show that the proposed CNN-based strategy outperforms previous methods with an average accuracy of 94.09%. These findings highlight the potential of CNNs in developing robust and scalable solutions for EEG-based emotion recognition, paving the way for more intuitive and adaptive systems in future applications.

  • Research Article
  • Citations: 108
  • 10.3390/s18082739
EEG-Based Emotion Recognition Using Quadratic Time-Frequency Distribution
  • Aug 20, 2018
  • Sensors (Basel, Switzerland)
  • Rami Alazrai + 3 more

Accurate recognition and understanding of human emotions is an essential skill that can improve the collaboration between humans and machines. In this vein, electroencephalogram (EEG)-based emotion recognition is considered an active research field with challenging issues regarding the analyses of the nonstationary EEG signals and the extraction of salient features that can be used to achieve accurate emotion recognition. In this paper, an EEG-based emotion recognition approach with a novel time-frequency feature extraction technique is presented. In particular, a quadratic time-frequency distribution (QTFD) is employed to construct a high resolution time-frequency representation of the EEG signals and capture the spectral variations of the EEG signals over time. To reduce the dimensionality of the constructed QTFD-based representation, a set of 13 time- and frequency-domain features is extended to the joint time-frequency domain and employed to quantify the QTFD-based time-frequency representation of the EEG signals. Moreover, to describe different emotion classes, we have utilized the 2D arousal-valence plane to develop four emotion labeling schemes of the EEG signals, such that each emotion labeling scheme defines a set of emotion classes. The extracted time-frequency features are used to construct a set of subject-specific support vector machine classifiers to classify the EEG signals of each subject into the different emotion classes that are defined using each of the four emotion labeling schemes. The performance of the proposed approach is evaluated using a publicly available EEG dataset, namely the DEAP dataset.
Moreover, we design three performance evaluation analyses, namely the channel-based analysis, feature-based analysis and neutral class exclusion analysis, to quantify the effects of utilizing different groups of EEG channels that cover various regions in the brain, reducing the dimensionality of the extracted time-frequency features and excluding the EEG signals that correspond to the neutral class, on the capability of the proposed approach to discriminate between different emotion classes. The results reported in the current study demonstrate the efficacy of the proposed QTFD-based approach in recognizing different emotion classes. In particular, the average classification accuracies obtained in differentiating between the various emotion classes defined using each of the four emotion labeling schemes are within the range of –. Moreover, the emotion classification accuracies achieved by our proposed approach are higher than the results reported in several existing state-of-the-art EEG-based emotion recognition studies.

  • Research Article
  • Citations: 4
  • 10.1038/s41598-024-82705-z
EEG-based emotion recognition using multi-scale dynamic CNN and gated transformer
  • Dec 28, 2024
  • Scientific Reports
  • Zhuoling Cheng + 4 more

Emotions play a crucial role in human thoughts, cognitive processes, and decision-making. EEG has become a widely utilized tool in emotion recognition due to its high temporal resolution, real-time monitoring capabilities, portability, and cost-effectiveness. In this paper, we propose a novel end-to-end emotion recognition method from EEG signals, called MSDCGTNet, which is based on the Multi-Scale Dynamic 1D CNN and the Gated Transformer. First, the Multi-Scale Dynamic CNN is used to extract complex spatial and spectral features from raw EEG signals, which not only avoids information loss but also reduces computational costs associated with the time-frequency conversion of signals. Then, the Gated Transformer Encoder is utilized to capture global dependencies of EEG signals. This encoder focuses on specific regions of the input sequence while reducing computational resources through parallel processing with the improved multi-head self-attention mechanisms. Third, the Temporal Convolution Network is used to extract temporal features from the EEG signals. Finally, the extracted abstract features are fed into a classification module for emotion recognition. The proposed method was evaluated on three publicly available datasets: DEAP, SEED, and SEED_IV. Experimental results demonstrate the high accuracy and efficiency of the proposed method for emotion recognition. This approach proves to be robust and suitable for various practical applications. By addressing challenges posed by existing methods, the proposed method provides a valuable and effective solution for the field of Brain-Computer Interface (BCI).

  • Research Article
  • Citations: 32
  • 10.3389/fnhum.2023.1169949
STGATE: Spatial-temporal graph attention network with a transformer encoder for EEG-based emotion recognition
  • Apr 13, 2023
  • Frontiers in Human Neuroscience
  • Jingcong Li + 4 more

Electroencephalogram (EEG) is a crucial and widely utilized technique in neuroscience research. In this paper, we introduce a novel graph neural network called the spatial-temporal graph attention network with a transformer encoder (STGATE) to learn graph representations of emotional EEG signals and improve emotion recognition performance. In STGATE, a transformer encoder captures time-frequency features, which are fed into a spatial-temporal graph attention network for emotion classification. Using a dynamic adjacency matrix, the proposed STGATE adaptively learns intrinsic connections between different EEG channels. To evaluate cross-subject emotion recognition performance, leave-one-subject-out experiments are carried out on three public emotion recognition datasets, i.e., SEED, SEED-IV, and DREAMER. The proposed STGATE model achieved state-of-the-art EEG-based emotion recognition accuracies of 90.37% on SEED, 76.43% on SEED-IV, and 76.35% on DREAMER. The experiments demonstrate the effectiveness of the proposed STGATE model for cross-subject EEG emotion recognition and its potential for graph-based neuroscience research.

  • Research Article
  • Citations: 16
  • 10.3390/diagnostics12102508
Use of Differential Entropy for Automated Emotion Recognition in a Virtual Reality Environment with EEG Signals.
  • Oct 16, 2022
  • Diagnostics
  • Hakan Uyanık + 4 more

Emotion recognition is one of the most important issues in human–computer interaction (HCI), neuroscience, and psychology fields. It is generally accepted that emotion recognition with neural data such as electroencephalography (EEG) signals, functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS) is better than other emotion detection methods such as speech, mimics, body language, facial expressions, etc., in terms of reliability and accuracy. In particular, EEG signals are bioelectrical signals that are frequently used because of the many advantages they offer in the field of emotion recognition. This study proposes an improved approach for EEG-based emotion recognition on a publicly available newly published dataset, VREED. Differential entropy (DE) features were extracted from four wavebands (theta 4–8 Hz, alpha 8–13 Hz, beta 13–30 Hz, and gamma 30–49 Hz) to classify two emotional states (positive/negative). Five classifiers, namely Support Vector Machine (SVM), k-Nearest Neighbor (kNN), Naïve Bayesian (NB), Decision Tree (DT), and Logistic Regression (LR) were employed with DE features for the automated classification of two emotional states. In this work, we obtained the best average accuracy of 76.22% ± 2.06 with the SVM classifier in the classification of two states. Moreover, we observed from the results that the highest average accuracy score was produced with the gamma band, as previously reported in studies in EEG-based emotion recognition.
