Abstract

With the growing service-robot market, emotion recognition has attracted increasing attention in both academia and industry. Much research has been done on emotion recognition based on facial expression and/or voice, but very few studies have exploited head posture in emotion recognition algorithms. This is in contrast to the fact that body gestures serve as important indicators of emotional state in daily interpersonal interaction. In this paper, we investigate the emotional meaning of head postures. Using a broad learning system (BLS), we build an emotion recognition model based on features extracted from two modalities: facial expression and head posture. The model is simple in structure and takes far less training time than deep-learning algorithms. Experimental results on a public emotional video dataset show that this bi-modal emotion recognition model improves the recognition accuracy of passive emotional states, such as 'sad' and 'worried', compared to a BLS model based on the single modality of facial expression.
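
The abstract does not give implementation details, but a standard broad learning system trains its output layer in closed form rather than by backpropagation, which is where the training-time advantage comes from. The following is a minimal sketch of such a BLS classifier in Python/NumPy, assuming the two modalities are simply concatenated into one feature vector; the feature dimensions (facial-expression features plus yaw/pitch/roll head-pose angles), the seven emotion classes, and all function names here are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_train(X, Y, n_feature_groups=10, n_feature_nodes=10,
              n_enhance_nodes=100, reg=1e-3):
    """Train a basic broad learning system (BLS) classifier.

    X : (n_samples, n_inputs) bi-modal features, e.g. facial-expression
        features concatenated with head-posture features (hypothetical layout).
    Y : (n_samples, n_classes) one-hot emotion labels.
    """
    # Random mapped-feature groups: Z_i = tanh(X W_i + b_i)
    Ws = [rng.standard_normal((X.shape[1], n_feature_nodes))
          for _ in range(n_feature_groups)]
    bs = [rng.standard_normal(n_feature_nodes)
          for _ in range(n_feature_groups)]
    Z = np.hstack([np.tanh(X @ W + b) for W, b in zip(Ws, bs)])

    # Enhancement nodes: H = tanh(Z W_h + b_h)
    Wh = rng.standard_normal((Z.shape[1], n_enhance_nodes))
    bh = rng.standard_normal(n_enhance_nodes)
    H = np.tanh(Z @ Wh + bh)

    # Output weights via ridge-regularised least squares -- a single
    # closed-form solve instead of iterative backpropagation.
    A = np.hstack([Z, H])
    Wout = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return (Ws, bs, Wh, bh, Wout)

def bls_predict(model, X):
    Ws, bs, Wh, bh, Wout = model
    Z = np.hstack([np.tanh(X @ W + b) for W, b in zip(Ws, bs)])
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ Wout

# Toy usage: 68 facial features + 3 head-pose angles, 7 emotion classes
# (all dimensions are made up for illustration).
X = rng.standard_normal((200, 71))
Y = np.eye(7)[rng.integers(0, 7, 200)]
model = bls_train(X, Y)
pred = bls_predict(model, X).argmax(axis=1)
```

Because the only learned parameters are the output weights, retraining after adding nodes or new training data reduces to updating one linear solve, which is the usual argument for BLS being much cheaper to train than deep networks.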
