Abstract

Emotion recognition has attracted considerable attention in recent years and is widely used in health care, education, human-computer interaction, and other fields. Different human emotional features are often used to recognize different emotions, and research on multimodal emotion recognition based on the fusion of multiple features is growing. This paper proposes a deep learning model for multimodal emotion recognition based on the fusion of electroencephalogram (EEG) signals and facial expressions to achieve strong classification performance. First, a pre-trained convolutional neural network (CNN) is used to extract facial features from facial expression images. Next, an attention mechanism is introduced to select the more critical facial frame features. Then, CNNs are applied to extract spatial features from the raw EEG signals, using local convolution kernels to learn features of the left- and right-hemisphere channels and a global convolution kernel to learn features of all EEG channels. After feature-level fusion, the fused facial expression and EEG features are fed into a classifier for emotion recognition. Experiments on the DEAP and MAHNOB-HCI datasets evaluate the performance of the proposed model. Classification accuracy reaches 96.63% for the valence dimension and 97.15% for the arousal dimension on the DEAP dataset, and 96.69% and 96.26%, respectively, on the MAHNOB-HCI dataset. The experimental results show that the proposed model performs emotion recognition effectively.
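The pipeline described in the abstract, with separate feature extractors for each modality followed by feature-level fusion and a classifier, can be sketched in outline. The following is a minimal toy sketch, not the authors' implementation: all shapes, layer sizes, and function names are illustrative assumptions, and simple random projections stand in for the trained CNN, attention, and classifier weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_face_features(frames):
    """Stand-in for a pre-trained CNN: map each face frame to a feature vector."""
    # frames: (n_frames, H, W) -> (n_frames, 64)
    W = rng.standard_normal((frames.shape[1] * frames.shape[2], 64))
    flat = frames.reshape(frames.shape[0], -1)
    return np.tanh(flat @ W)

def attention_pool(frame_feats):
    """Soft attention over frames: weight the more informative frames higher."""
    scores = frame_feats @ rng.standard_normal(frame_feats.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ frame_feats  # (64,)

def extract_eeg_features(eeg):
    """Local kernels per hemisphere plus a global kernel over all channels."""
    # eeg: (n_channels, n_samples); assume the first half are left-hemisphere channels
    half = eeg.shape[0] // 2
    left, right, all_ch = eeg[:half].mean(axis=0), eeg[half:].mean(axis=0), eeg.mean(axis=0)
    k = rng.standard_normal(8)  # illustrative 1-D convolution kernel
    feats = [np.convolve(sig, k, mode="valid")[::50] for sig in (left, right, all_ch)]
    return np.concatenate(feats)

def classify(fused):
    """Toy linear classifier head over the fused feature vector."""
    logits = fused @ rng.standard_normal((fused.shape[0], 2))
    return int(np.argmax(logits))  # 0 = low, 1 = high (e.g. valence)

frames = rng.standard_normal((16, 32, 32))  # 16 facial expression frames
eeg = rng.standard_normal((32, 512))        # 32 EEG channels, 512 samples

face_vec = attention_pool(extract_face_features(frames))
eeg_vec = extract_eeg_features(eeg)
fused = np.concatenate([face_vec, eeg_vec])  # feature-level fusion
label = classify(fused)
```

In the actual model, the random projections would be replaced by trained convolutional and attention layers, and the classifier would be trained jointly on the fused features.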
