Abstract

Despite the rich literature on facial expression recognition, few works have considered the fusion of features extracted in multiple domains. This paper proposes a multi-domain facial emotion recognition framework. The approach simultaneously extracts local facial features from the spatial domain (geometric features), the frequency domain (local wavelet features), and the spatio-temporal domain (Local Binary Patterns on Three Orthogonal Planes, LBP-TOP). Feed-forward neural network and support vector machine (SVM) models were trained on each of the extracted feature sets to assign one of the seven basic affective states. A late fusion was then performed to characterize the final facial expression. Experiments were conducted on the SAVEE database. Classification results show the complementary effect of the proposed multi-domain features, with an overall accuracy of 99.18%, which outperforms state-of-the-art works.
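To illustrate the late-fusion step mentioned above, the minimal sketch below combines the class-probability vectors produced by the three per-domain classifiers (geometric, wavelet, LBP-TOP) with a simple averaging rule before selecting the winning emotion. The fusion rule, the label ordering, and the example probability values are assumptions for demonstration only; the abstract does not specify the exact fusion scheme used in the paper.

```python
import numpy as np

# Seven basic affective states (label names assumed; SAVEE covers these categories).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness", "surprise"]

def late_fusion(prob_geometric, prob_wavelet, prob_lbp_top):
    """Average the per-domain class-probability vectors and return the
    predicted emotion. A mean rule is assumed here purely as a sketch;
    other fusion rules (product, weighted sum, majority vote) are possible."""
    stacked = np.vstack([prob_geometric, prob_wavelet, prob_lbp_top])
    fused = stacked.mean(axis=0)
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical probability outputs from the three domain-specific models.
p_geo = np.array([0.05, 0.02, 0.03, 0.70, 0.10, 0.05, 0.05])
p_wav = np.array([0.10, 0.05, 0.05, 0.55, 0.15, 0.05, 0.05])
p_lbp = np.array([0.02, 0.03, 0.05, 0.80, 0.05, 0.03, 0.02])

label, fused = late_fusion(p_geo, p_wav, p_lbp)
print(label)            # "happiness"
print(fused.round(3))   # fused probability vector over the seven emotions
```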
