Graph neural networks (GNNs) can efficiently process graph-structured data, making them a promising approach for electroencephalogram (EEG) emotion recognition. However, owing to the dynamic functional connectivity and nonlinear relationships between brain regions, representing EEG signals as graph data remains a significant challenge. To address this problem, we propose a multi-domain-based graph representation learning (MD2GRL) framework to model EEG signals as graph data. Specifically, MD2GRL leverages gated recurrent units (GRU) and power spectral density (PSD) to construct the node features of two subgraphs. A self-attention mechanism is then adopted to learn the similarity matrix between nodes, which is fused with the intrinsic spatial matrix of EEG to compute the corresponding adjacency matrix. In addition, we introduce a learnable soft-thresholding operator that sparsifies the adjacency matrix to reduce noise in the graph structure. For the downstream task, we design a dual-branch GNN and incorporate spatial asymmetry for graph coarsening. We conducted experiments on the publicly available SEED and DEAP datasets, in both subject-dependent and subject-independent settings, to evaluate the model's emotion classification performance. Experimental results show that our method achieves state-of-the-art (SOTA) classification performance in both settings. Furthermore, visualization of the learned graph structure reveals EEG channel connections that are significantly related to emotion while suppressing irrelevant noise. These findings are consistent with established neuroscience research and demonstrate the potential of our approach for understanding the neural underpinnings of emotion.
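The adjacency construction described above (a self-attention similarity matrix fused with a spatial matrix, then sparsified by soft thresholding) can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the projection matrices `W_q`/`W_k`, the fusion weight `alpha`, the threshold `tau`, and the identity placeholder for the spatial matrix are all assumptions for demonstration.

```python
import numpy as np

def soft_threshold(A, tau):
    # Soft-thresholding operator: shrinks entries toward zero and
    # zeroes out those with magnitude <= tau, sparsifying the matrix.
    return np.sign(A) * np.maximum(np.abs(A) - tau, 0.0)

def attention_adjacency(X, W_q, W_k, spatial, tau=0.1, alpha=0.5):
    # Scaled dot-product self-attention over node features X
    # gives a row-stochastic similarity matrix S between nodes.
    Q, K = X @ W_q, X @ W_k
    scores = Q @ K.T / np.sqrt(K.shape[1])
    S = np.exp(scores - scores.max(axis=1, keepdims=True))
    S /= S.sum(axis=1, keepdims=True)
    # Fuse the learned similarity with the intrinsic spatial structure,
    # then sparsify with the soft-thresholding operator.
    A = alpha * S + (1 - alpha) * spatial
    return soft_threshold(A, tau)

rng = np.random.default_rng(0)
n, d = 6, 4                       # e.g. 6 EEG channels, 4-dim node features
X = rng.standard_normal((n, d))
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
spatial = np.eye(n)               # placeholder for the EEG spatial matrix
A = attention_adjacency(X, W_q, W_k, spatial)
print(A.shape)                    # (6, 6); many small entries are zeroed
```

In the paper's framework the threshold is learnable (e.g. a trainable parameter updated by gradient descent); here `tau` is fixed purely for illustration.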