Abstract
This paper presents a hybrid multimodal model that improves human emotion recognition by combining facial expression and body gesture recognition. It extends the authors' previous work on pre-trained deep neural network (DNN) models for facial emotion recognition (FER) by adding emotions extracted from body language. To extract emotions from upper-body gestures, a second DNN model was developed and trained on dedicated datasets. The emotion information obtained by combining both models is more accurate and can be applied in education, medicine, psychology, product advertisement, marketing, human-machine interfaces, etc. In our case, the aim is to personalize lecture material for students during online training according to their emotional state.
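The abstract does not specify how the two models' outputs are combined, so the following is only a minimal sketch of one common approach, late fusion by weighted averaging of the two networks' softmax outputs. The emotion label set, the `fuse_predictions` helper, and the `face_weight` parameter are all illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical emotion label set (the paper's actual classes may differ).
EMOTIONS = ["angry", "happy", "neutral", "sad", "surprised"]

def fuse_predictions(face_probs, body_probs, face_weight=0.6):
    """Late fusion: weighted average of the facial-expression model's and
    the body-gesture model's softmax probability vectors."""
    face = np.asarray(face_probs, dtype=float)
    body = np.asarray(body_probs, dtype=float)
    fused = face_weight * face + (1.0 - face_weight) * body
    fused /= fused.sum()  # renormalize to a probability distribution
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: the two models disagree; fusion resolves the prediction.
label, probs = fuse_predictions(
    [0.10, 0.55, 0.20, 0.10, 0.05],  # facial-expression model output
    [0.05, 0.30, 0.50, 0.10, 0.05],  # body-gesture model output
)
```

The weight reflects how much each modality is trusted; in practice it would be tuned on a validation set, or replaced by a learned fusion layer.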