Abstract

Graph convolutional networks (GCNs) have shown great prowess in learning topological relationships among electroencephalogram (EEG) channels for EEG-based emotion recognition. However, most existing GCN-only methods are designed around a single spatial pattern, lack connectivity enhancement within local functional regions, and ignore the data dependencies of the original EEG data. In this article, a hierarchical dynamic GCN (HD-GCN) is proposed to explore dynamic multilevel spatial information among EEG channels, with discriminative features of the EEG signals serving as auxiliary information. Specifically, representation learning in topological space consists of two branches: one for extracting global dynamic information and one for exploring augmented information within local functional regions. In each branch, a layerwise adjacency matrix is used to enrich the expressive power of the GCN. Furthermore, a data-dependent auxiliary information module (AIM) is developed to capture multidimensional fused features. Extensive experiments on two public datasets, the SJTU emotion EEG dataset (SEED) and DREAMER, demonstrate that the proposed method consistently outperforms state-of-the-art methods. An interpretability analysis of the proposed model is also performed, revealing the active brain regions and important electrode pairs associated with emotion.
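To make the two-branch structure described above concrete, the following is a minimal sketch (not the authors' released code) of a GCN with a learnable, layerwise adjacency matrix, a global branch over all electrodes, a local branch whose adjacency is masked to connections inside predefined functional regions, and a simple auxiliary-feature fusion head. The channel count, region mask, feature dimensions, and class count are hypothetical placeholders.

import torch
import torch.nn as nn

class DynamicGCNLayer(nn.Module):
    """Graph convolution whose adjacency matrix is a trainable parameter,
    so each layer learns its own channel connectivity (layerwise adjacency)."""
    def __init__(self, num_nodes, in_dim, out_dim, mask=None):
        super().__init__()
        self.adj = nn.Parameter(torch.eye(num_nodes) + 0.01 * torch.randn(num_nodes, num_nodes))
        self.register_buffer("mask", mask if mask is not None else torch.ones(num_nodes, num_nodes))
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                                   # x: (batch, nodes, in_dim)
        a = torch.softmax(self.adj * self.mask, dim=-1)     # row-normalized, optionally region-masked
        return torch.relu(self.proj(a @ x))

class TwoBranchGCN(nn.Module):
    """Global branch over all electrodes plus a local branch restricted to
    edges within functional regions, fused with auxiliary features."""
    def __init__(self, num_nodes, in_dim, hid_dim, region_mask, aux_dim, num_classes):
        super().__init__()
        self.global_gcn = nn.Sequential(
            DynamicGCNLayer(num_nodes, in_dim, hid_dim),
            DynamicGCNLayer(num_nodes, hid_dim, hid_dim),
        )
        self.local_gcn = nn.Sequential(
            DynamicGCNLayer(num_nodes, in_dim, hid_dim, mask=region_mask),
            DynamicGCNLayer(num_nodes, hid_dim, hid_dim, mask=region_mask),
        )
        self.aux = nn.Linear(aux_dim, hid_dim)               # auxiliary-information branch
        self.cls = nn.Linear(2 * num_nodes * hid_dim + hid_dim, num_classes)

    def forward(self, x, aux_feat):
        g = self.global_gcn(x).flatten(1)                    # global topological features
        l = self.local_gcn(x).flatten(1)                     # local (within-region) features
        a = torch.relu(self.aux(aux_feat))                   # fused auxiliary features
        return self.cls(torch.cat([g, l, a], dim=-1))

# Usage with hypothetical shapes: 62 EEG channels, 5 per-channel features,
# and a 310-dimensional auxiliary feature vector per sample.
region_mask = torch.ones(62, 62)   # placeholder; a real mask would zero cross-region edges
model = TwoBranchGCN(62, 5, 32, region_mask, aux_dim=310, num_classes=3)
logits = model(torch.randn(8, 62, 5), torch.randn(8, 310))
print(logits.shape)                # torch.Size([8, 3])

The masked, learnable adjacency is one plausible way to realize "dynamic" connectivity per layer; the actual HD-GCN formulation may differ in how the adjacency and auxiliary features are constructed and fused.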
