Abstract

Electroencephalography (EEG)-based emotion computing has become a research hotspot in human-computer interaction (HCI). However, traditional convolutional neural networks struggle to learn the interactions between brain regions in emotional states, because information transmission between neurons constitutes a brain network structure. In this paper, we propose a novel model combining a graph convolutional network and a convolutional neural network, named MDGCN-SRCNN, which aims to fully extract both channel-connectivity features at different receptive fields and deep-layer abstract features to distinguish different emotions. In particular, we add a style-based recalibration module to the CNN to extract deep-layer features, which better selects the features highly related to emotion. We conducted individual experiments on the SEED and SEED-IV data sets, which demonstrated the effectiveness of the MDGCN-SRCNN model: the recognition accuracy is 95.08% on SEED and 85.52% on SEED-IV, outperforming other state-of-the-art methods. In addition, by visualizing the distribution of features from different layers, we show that combining shallow-layer and deep-layer features effectively improves recognition performance. Finally, by analyzing the connection weights between channels after model learning, we identify the brain regions and channel connections that are important for emotion generation.
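To make the two building blocks concrete, the following is a minimal NumPy sketch, not the authors' implementation: one graph-convolution step over EEG channels with a learnable (here, random) adjacency matrix, followed by a simplified style-based recalibration gate. The electrode count (62, as in SEED), the feature dimensions, and the fixed mean-plus-std style combination are illustrative assumptions.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetrically normalize an adjacency matrix: D^{-1/2} (A + I) D^{-1/2}."""
    A = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(axis=1))  # inverse sqrt of node degrees
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def graph_conv(X, A, W):
    """One graph-convolution step over channels: ReLU(norm(A) @ X @ W)."""
    return np.maximum(normalize_adjacency(A) @ X @ W, 0.0)

def style_recalibration(F):
    """Simplified style-based recalibration: pool per-channel mean and std
    ("style"), squash them into a (0, 1) gate, and rescale the features.
    (A learned SRM would weight mu and sigma with trained parameters.)"""
    mu = F.mean(axis=1, keepdims=True)
    sigma = F.std(axis=1, keepdims=True)
    gate = 1.0 / (1.0 + np.exp(-(mu + sigma)))  # sigmoid gate per channel
    return F * gate

rng = np.random.default_rng(0)
n_channels, n_features, n_hidden = 62, 5, 8    # e.g. 62 electrodes, 5 bands
X = rng.standard_normal((n_channels, n_features))      # per-channel features
A = np.abs(rng.standard_normal((n_channels, n_channels)))
A = (A + A.T) / 2                               # keep adjacency symmetric
W = rng.standard_normal((n_features, n_hidden)) # trainable weight matrix

H = style_recalibration(graph_conv(X, A, W))
print(H.shape)  # (62, 8)
```

In the full model, several such graph-convolution layers with different receptive fields would feed a CNN branch, and the recalibration gate would be learned rather than fixed.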

Highlights

  • Human emotions come in many forms and can be recognized from facial expressions (Harit et al., 2018), body movements (Ajili et al., 2019), and physiological signals (Goshvarpour and Goshvarpour, 2019; Valderas et al., 2019)

  • The MDGCN-SRCNN model proposed in this paper is compared with state-of-the-art methods such as Support Vector Machine (SVM), Deep Belief Network (DBN), Dynamic Graph Convolutional Neural Network (DGCNN), Regularized Graph Neural Network (RGNN), GCB-Net, STRNN, and BiHDM

  • We propose a multi-layer dynamic graph convolutional network-style-based recalibration convolutional neural network (MDGCN-SRCNN) model for EEG-based emotion recognition


Introduction

Human emotion is a state that reflects the complex mental activities of human beings. The prerequisite for realizing human-computer emotional interaction is to recognize human emotional states in real time. Human emotions come in many forms and can be recognized from facial expressions (Harit et al., 2018), body movements (Ajili et al., 2019), and physiological signals (Goshvarpour and Goshvarpour, 2019; Valderas et al., 2019). Humans can control their facial expressions and body movements to hide or disguise their emotions, whereas physiological signals such as the electroencephalogram, electrocardiogram, and electromyogram have the advantage of being difficult to hide or disguise. With the rapid development of non-invasive, portable, and inexpensive EEG acquisition equipment, EEG-based emotion recognition has attracted the attention of researchers.

