Abstract

Understanding the expression of human emotional states plays a prominent role in interactive multimodal interfaces, affective computing, and the healthcare sector. Emotion recognition from electroencephalogram (EEG) signals offers a simple, inexpensive, compact, and precise solution. This paper proposes a novel four-stage method for human emotion recognition using multivariate EEG signals. In the first stage, multivariate variational mode decomposition (MVMD) is employed to extract an ensemble of multivariate modulated oscillations (MMOs) from multichannel EEG signals. In the second stage, multivariate time–frequency (TF) images are generated using the joint instantaneous amplitude (JIA) and joint instantaneous frequency (JIF) functions computed from the extracted MMOs. In the third stage, a deep residual convolutional neural network (ResNet-18) is customized to extract hidden features from the TF images. Finally, classification is performed by a softmax layer. To further evaluate the performance of the model, various machine learning (ML) classifiers are also employed. The feasibility and validity of the proposed method are verified on two public emotion EEG datasets. The experimental results demonstrate that the proposed method outperforms state-of-the-art emotion recognition methods, with best accuracies of 99.03%, 97.59%, and 97.75% for classifying arousal, dominance, and valence, respectively. Our study reveals that TF-based multivariate EEG signal analysis using a deep residual network achieves superior performance in human emotion recognition.
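For concreteness, the four-stage pipeline can be sketched in Python as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the MVMD step is assumed to come from an external implementation (only its output, a set of multivariate modes, appears here as a synthetic stand-in), the JIA/JIF computations follow the standard amplitude-weighted definitions via the Hilbert transform, and the sampling rate, frequency range, image size, and two-class output head are illustrative placeholders rather than the paper's settings.

```python
# Sketch of the four-stage pipeline: MVMD modes -> JIA/JIF -> TF image -> ResNet-18.
import numpy as np
from scipy.signal import hilbert
import torch
import torch.nn as nn
from torchvision.models import resnet18


def joint_ia_if(mode, fs):
    """Joint instantaneous amplitude/frequency of one multivariate mode.

    mode: (channels, samples) real-valued multivariate modulated oscillation,
    assumed to be one MMO produced by an MVMD implementation.
    """
    z = hilbert(mode, axis=-1)                       # channel-wise analytic signals
    a = np.abs(z)                                    # instantaneous amplitudes a_k(t)
    phase = np.unwrap(np.angle(z), axis=-1)
    inst_f = np.gradient(phase, axis=-1) * fs / (2 * np.pi)  # per-channel IF in Hz
    power = a ** 2
    jia = np.sqrt(power.sum(axis=0))                 # JIA = sqrt(sum_k a_k(t)^2)
    jif = (power * inst_f).sum(axis=0) / power.sum(axis=0)   # power-weighted JIF
    return jia, jif


def tf_image(modes, fs, n_freq_bins=224, f_max=64.0):
    """Rasterize the JIA/JIF trajectories of all modes into one TF image."""
    n_samples = modes.shape[-1]
    img = np.zeros((n_freq_bins, n_samples))
    for mode in modes:                               # modes: (K, channels, samples)
        jia, jif = joint_ia_if(mode, fs)
        rows = np.clip((jif / f_max * n_freq_bins).astype(int), 0, n_freq_bins - 1)
        img[rows, np.arange(n_samples)] += jia       # paint amplitude at its frequency
    return img


# ResNet-18 customized for single-channel TF images and a 2-class softmax head
# (e.g., low/high arousal); class count is an assumption for illustration.
model = resnet18()
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)

if __name__ == "__main__":
    fs = 128.0
    rng = np.random.default_rng(0)
    # Stand-in for MVMD output: K=4 modes, 32 channels, 2 s of data.
    modes = rng.standard_normal((4, 32, 256))
    img = tf_image(modes, fs)
    x = torch.from_numpy(img).float()[None, None]    # (batch=1, channel=1, H, W)
    print(model(x).shape)                            # torch.Size([1, 2])
```

In practice the TF image would be resized to the network's nominal input resolution (e.g., 224×224) and the softmax applied via a cross-entropy loss during training; ResNet-18's adaptive pooling lets this sketch accept the raw image size directly.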
