Abstract

Knowledge of people's emotions can serve as important context for automatic service delivery in context-aware systems. Hence, human facial expression recognition (FER) has emerged as an important research area over the last two decades. To recognize expressions accurately, FER systems require automatic face detection followed by the extraction of robust features from important facial parts, and the process should remain robust to noise such as varying lighting conditions and differences in subjects' facial characteristics. Accordingly, this work implements a robust FER system that provides high recognition accuracy even in the presence of such variations. The system uses an unsupervised technique based on an active contour model for automatic face detection and extraction. In this model, a combination of two energy functions, the Chan-Vese energy and the Bhattacharyya distance, is employed to minimize dissimilarity within the face region and maximize the distance between the face and the background. Next, noise reduction is achieved by means of wavelet decomposition, followed by the extraction of facial movement features using optical flow. These features reflect facial muscle movements, which capture the static, dynamic, geometric, and appearance characteristics of facial expressions. After feature extraction, feature selection is performed using stepwise linear discriminant analysis, which is more robust than feature selection methods previously employed for FER. Finally, expressions are recognized using trained hidden Markov models (HMMs). To demonstrate the robustness of the proposed system, and unlike most previous works that were evaluated on a single dataset, its performance is assessed in a large-scale experiment using five different publicly available datasets. The weighted average recognition rate across these datasets indicates the success of the proposed system for FER.
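The abstract does not give implementation details for the wavelet-based noise-reduction step. As a minimal, hypothetical sketch of the general idea (a one-level Haar decomposition with hard thresholding of detail coefficients; the paper's actual wavelet family and threshold rule are not specified here, and all function names are illustrative):

```python
import numpy as np

def haar_decompose(signal):
    """One level of Haar wavelet decomposition: returns (approximation, detail)."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)  # low-pass: pairwise averages
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)  # high-pass: pairwise differences
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert one level of Haar decomposition."""
    even = (approx + detail) / np.sqrt(2)
    odd = (approx - detail) / np.sqrt(2)
    out = np.empty(even.size + odd.size)
    out[0::2] = even
    out[1::2] = odd
    return out

def denoise(signal, threshold):
    """Zero out small detail coefficients (treated as noise), then reconstruct."""
    approx, detail = haar_decompose(signal)
    detail = np.where(np.abs(detail) < threshold, 0.0, detail)
    return haar_reconstruct(approx, detail)
```

In a 2-D FER pipeline the same idea would be applied along image rows and columns; libraries such as PyWavelets provide multi-level 2-D transforms for this purpose.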
