Abstract

The expression of human emotion is a complex process that often manifests through physiological and psychological traits and results in spatio-temporal brain activity. This brain activity can be captured with an electroencephalogram (EEG) and used for emotion recognition. In this paper, we present a novel approach to EEG-based emotion recognition (in terms of arousal, valence, and dominance) that operates on unprocessed EEG signals. Input EEG samples are passed through channel-specific encoders built from SincNet-based convolution blocks, whose band-pass filters are fine-tuned for emotion recognition during training, to learn high-level features relevant to the objectives. The resulting feature embeddings are then passed through a stack of graph convolution networks that model the spatial propagation of brain activity, under the assumption that the activity captured at one electrode is influenced by the activity at neighbouring electrodes. The channels are represented as nodes in a graph whose structure follows the relative positioning of the electrodes during dataset acquisition. Multi-head attention is applied alongside the graph convolutions to jointly attend to features from different representation sub-spaces, which improves learning. The resulting features are finally passed through a deep neural network-based multi-task classifier that identifies each dimensional emotional state as low or high. Our proposed model achieves accuracies of 88.24%, 88.80%, and 88.22% for arousal, valence, and dominance, respectively, under 10-fold cross-validation, and 63.71%, 64.98%, and 61.81% under leave-one-subject-out (LOSO) cross-validation on the DREAMER dataset, and 69.72%, 69.43%, and 70.72% under LOSO evaluation on the DEAP dataset, surpassing state-of-the-art methods.
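The abstract describes a three-stage pipeline: learnable SincNet-style band-pass filters encode each raw EEG channel, the per-channel embeddings become nodes of an electrode graph processed by graph convolutions and multi-head self-attention, and a shared representation feeds three binary classification heads. The PyTorch sketch below is a minimal illustration of that flow, not the authors' implementation: the filter count, kernel size, feature dimension, head count, and the placeholder adjacency matrix are all assumed values, and a faithful adjacency would follow the actual electrode montage of the recording headset.

```python
# Minimal sketch of the pipeline described in the abstract (assumed
# hyper-parameters throughout; not the authors' reported configuration).
import math

import torch
import torch.nn as nn
import torch.nn.functional as F


class SincConv1d(nn.Module):
    """Simplified SincNet-style convolution: each filter is a band-pass
    parameterised by two learnable cutoff frequencies, so the filter bank
    can be fine-tuned for emotion recognition during training."""

    def __init__(self, out_channels: int, kernel_size: int, sample_rate: int = 128):
        super().__init__()
        self.kernel_size = kernel_size
        # Learnable low cutoffs and bandwidths in Hz (illustrative init).
        self.low_hz = nn.Parameter(torch.linspace(1.0, 40.0, out_channels).unsqueeze(1))
        self.band_hz = nn.Parameter(torch.full((out_channels, 1), 4.0))
        t = torch.arange(-(kernel_size // 2), kernel_size // 2 + 1) / sample_rate
        # Avoid division by zero at t = 0 (the sinc limit there is 2f).
        self.register_buffer("t", torch.where(t == 0, torch.full_like(t, 1e-9), t))
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, 1, time)
        low = self.low_hz.abs()
        high = low + self.band_hz.abs()

        def lowpass(f):  # ideal low-pass impulse response with cutoff f (Hz)
            return torch.sin(2 * math.pi * f * self.t) / (math.pi * self.t)

        # A band-pass filter is the difference of two low-pass filters.
        filters = (lowpass(high) - lowpass(low)) * self.window
        filters = filters / filters.abs().sum(dim=1, keepdim=True)
        return F.conv1d(x, filters.unsqueeze(1), padding=self.kernel_size // 2)


class GraphConv(nn.Module):
    """One graph convolution layer, H' = ReLU(A_hat H W), with A_hat the
    symmetrically normalised adjacency of the electrode graph."""

    def __init__(self, in_dim: int, out_dim: int, adj: torch.Tensor):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        a = adj + torch.eye(adj.size(0))  # add self-loops
        d = a.sum(dim=1).pow(-0.5)
        self.register_buffer("a_hat", d.unsqueeze(1) * a * d.unsqueeze(0))

    def forward(self, h: torch.Tensor) -> torch.Tensor:  # h: (batch, nodes, dim)
        return F.relu(self.a_hat @ self.lin(h))


class EmotionNet(nn.Module):
    """Channel-specific SincNet encoders -> graph convolutions over the
    electrode graph -> multi-head self-attention -> multi-task heads."""

    def __init__(self, n_channels: int, adj: torch.Tensor,
                 n_filters: int = 8, feat_dim: int = 64, n_heads: int = 4):
        super().__init__()
        # One encoder per EEG channel, as the abstract specifies.
        self.encoders = nn.ModuleList(
            SincConv1d(n_filters, kernel_size=65) for _ in range(n_channels))
        self.proj = nn.Linear(n_filters, feat_dim)
        self.gcn1 = GraphConv(feat_dim, feat_dim, adj)
        self.gcn2 = GraphConv(feat_dim, feat_dim, adj)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        # Three binary (low/high) heads sharing one representation.
        self.heads = nn.ModuleDict(
            {k: nn.Linear(feat_dim, 2) for k in ("arousal", "valence", "dominance")})

    def forward(self, x: torch.Tensor) -> dict:  # x: (batch, channels, time)
        # Encode each raw channel separately, then average-pool over time.
        feats = [enc(x[:, i : i + 1]).mean(dim=-1)
                 for i, enc in enumerate(self.encoders)]
        h = self.proj(torch.stack(feats, dim=1))  # (batch, nodes, feat_dim)
        h = self.gcn2(self.gcn1(h))               # spatial propagation
        attn_out, _ = self.attn(h, h, h)          # attend across sub-spaces
        h = (h + attn_out).mean(dim=1)            # residual + pool over electrodes
        return {k: head(h) for k, head in self.heads.items()}
```

A toy run with a placeholder ring adjacency (the real graph would connect physically neighbouring electrodes, e.g. the 14 channels of the DREAMER headset):

```python
adj = torch.zeros(14, 14)
idx = torch.arange(14)
adj[idx, (idx + 1) % 14] = adj[(idx + 1) % 14, idx] = 1  # placeholder graph
model = EmotionNet(n_channels=14, adj=adj)
logits = model(torch.randn(4, 14, 256))  # batch of 4, 2 s of 128 Hz EEG
print({k: tuple(v.shape) for k, v in logits.items()})  # each head: (4, 2)
```

The three heads share the pooled graph representation, mirroring the paper's joint (multi-task) prediction of arousal, valence, and dominance; in training, each head would receive its own cross-entropy loss against the binarised low/high labels.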
