Abstract

Emotion recognition has received increasing attention in human-computer interaction (HCI) and psychological assessment. Compared with single-modal emotion recognition, the multimodal paradigm performs better because it introduces complementary information. However, current research focuses mainly on hearing subjects, while deaf subjects also need to understand emotional changes in real life. In this paper, we propose a multimodal continuous emotion recognition method for deaf subjects based on facial expressions and electroencephalograph (EEG) signals. Twelve emotional movie clips were selected as stimuli and annotated by ten postgraduates majoring in psychology. The EEG signals and facial expressions of deaf subjects were collected while they watched the stimulus clips. Differential entropy (DE) features were extracted from the EEG signals by time-frequency analysis, and six facial features were extracted from facial landmarks. Long short-term memory (LSTM) networks were used to perform feature-level fusion and to capture the temporal dynamics of emotions. The results show that EEG signals capture the dynamic emotions of deaf subjects better than facial expressions in continuous emotion recognition, and that multimodal fusion compensates for the limitations of each single modality and achieves better performance. Finally, analysis of the neural activities of deaf subjects reveals that the prefrontal lobe region may be strongly related to negative emotion processing, while the lateral temporal lobe region may be strongly related to positive emotion processing.
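Below are two minimal sketches in Python of the pipeline described above; they are illustrative assumptions, not the authors' released code. The first computes DE features under the common convention that the DE of a band-filtered, approximately Gaussian EEG segment reduces to (1/2) log(2*pi*e*variance); the band boundaries, filter order, and sampling rate are assumptions not stated in the abstract.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG frequency bands in Hz (assumed; the abstract
# does not specify the bands used in the paper).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def de_features(eeg, fs=200):
    """Differential entropy per channel and band.

    For a band-filtered signal assumed Gaussian, DE reduces to
    0.5 * log(2 * pi * e * variance).
    eeg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_channels, n_bands).
    """
    feats = []
    for lo, hi in BANDS.values():
        b, a = butter(4, [lo, hi], btype="band", fs=fs)
        band = filtfilt(b, a, eeg, axis=-1)
        feats.append(0.5 * np.log(2 * np.pi * np.e * band.var(axis=-1)))
    return np.stack(feats, axis=-1)
```

The second sketch shows one plausible form of the feature-level fusion: per-window EEG DE features and the six facial features are concatenated at each time step, an LSTM models the temporal dynamics, and a linear head regresses a continuous emotion score. The dimensions (e.g. eeg_dim=310, i.e. 62 channels x 5 bands) and the single-layer architecture are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class FusionLSTM(nn.Module):
    """Feature-level fusion of EEG DE features and facial features,
    followed by an LSTM for temporal modeling and a regression head
    for the continuous emotion label."""

    def __init__(self, eeg_dim=310, face_dim=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(eeg_dim + face_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # continuous emotion score

    def forward(self, eeg_seq, face_seq):
        # eeg_seq: (batch, time, eeg_dim); face_seq: (batch, time, face_dim)
        fused = torch.cat([eeg_seq, face_seq], dim=-1)
        out, _ = self.lstm(fused)           # (batch, time, hidden)
        return self.head(out).squeeze(-1)   # (batch, time)
```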
