Abstract

Electroencephalogram (EEG) signals directly reflect human brain activity, and EEG-based emotion recognition has recently attracted widespread attention. However, although most existing methods can extract both temporal and spatial features of EEG signals, the two extraction processes are separate: some spatial features may be lost when extracting time-series information, and vice versa. Additionally, the non-Euclidean relationships between EEG electrodes cannot be ignored. To address these issues, we construct a convolutional gated recurrent unit-driven multidimensional dynamic graph neural network (CGRU-MDGN), which mines the local spatial characteristics of EEG signals while learning their temporal information and captures non-Euclidean spatial features between EEG channels, enabling more accurate emotion classification. Specifically, we substitute convolution layers for the fully connected layers of the GRU, so that both the local spatial features and the time-series information of EEG signals are taken into account. Next, multiple non-Euclidean spatial representations are generated using multidimensional dynamic graph convolution. The features from different dimensions or views are then merged through adaptive-weighted summation to maximize the information retained from each view. Comprehensive experiments show that the proposed CGRU-MDGN achieves competitive results on subject-independent emotion recognition tasks.
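The two core operations the abstract describes — a GRU cell whose fully connected layers are replaced by convolutions, and an adaptive-weighted summation of multi-view features — can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the paper's implementation: all function names, kernel shapes, and the use of a softmax over learnable scalars for the adaptive weights are hypothetical, and the actual model would learn these parameters end-to-end.

```python
import numpy as np

def conv2d_same(x, w):
    # Naive 'same'-padded 2D cross-correlation.
    # x: (C_in, H, W); w: (C_out, C_in, k, k) with odd k.
    c_out, c_in, k, _ = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    H, W = x.shape[1], x.shape[2]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convgru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    # Standard GRU gate equations with every matrix multiplication
    # replaced by a convolution, so the hidden state keeps the
    # spatial layout of the input (biases omitted for brevity).
    z = sigmoid(conv2d_same(x, Wz) + conv2d_same(h, Uz))        # update gate
    r = sigmoid(conv2d_same(x, Wr) + conv2d_same(h, Ur))        # reset gate
    h_tilde = np.tanh(conv2d_same(x, Wh) + conv2d_same(r * h, Uh))
    return (1.0 - z) * h + z * h_tilde

def adaptive_weighted_sum(feats, alphas):
    # Fuse same-shaped feature maps from different views with
    # softmax-normalized scalar weights (a hypothetical reading of
    # the paper's "adaptive-weighted summation").
    w = np.exp(alphas - np.max(alphas))
    w = w / w.sum()
    return sum(wi * f for wi, f in zip(w, feats))
```

With equal weights, `adaptive_weighted_sum` reduces to a plain average; training the `alphas` lets the model emphasize the most informative dimension or view. Shapes are preserved throughout: feeding an `(C_in, H, W)` frame and a `(C_h, H, W)` hidden state through `convgru_step` returns a new `(C_h, H, W)` hidden state.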
