Abstract

The primary goal of this paper is to describe a proposed framework for recognizing human facial expressions. A methodology is proposed and developed to identify facial emotions using axes-angle features extracted from facial landmarks in 4D dynamic facial expression video data. The 4D facial expression recognition (FER) problem is modeled as a class-imbalanced problem over the full video sequence. The landmarks are positioned at fiducial facial features: around the brows, eyes, nose, cheeks, and lips. After initial preprocessing of the facial landmarks, feature extraction is carried out: input feature vectors are constructed from the gamma-axis angles and the magnitudes of landmark vectors in three-dimensional Euclidean space. The paper develops a new feature vector by concatenating the angle and magnitude features and compares it against a model built from the angle features alone. In all three models, several filter-based feature selection techniques are used to estimate feature importance. The framework employs a multi-class support vector machine (SVM) in MATLAB, with the true positive rate (TPR) and average recognition rate (ARR) as performance metrics. The final results are compared to conventional state-of-the-art approaches in terms of average classification accuracy. The results reveal that the combined feature is highly informative, demonstrating the efficiency of the proposed landmark-based strategy for classifying facial expressions.
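As a rough illustration of the kind of angle-and-magnitude descriptor the abstract describes, the sketch below computes, for each pair of 3D facial landmarks, the Euclidean magnitude of the displacement vector and its direction angles with the coordinate axes, then concatenates angles and magnitudes into a single feature vector. This is a minimal NumPy sketch under assumed details (pairwise landmark vectors, direction-cosine angles); the paper's exact feature construction may differ.

```python
import numpy as np

def axes_angle_magnitude_features(landmarks):
    """Hypothetical sketch of an axes-angle + magnitude descriptor.

    `landmarks` is an (N, 3) array of 3D facial landmark coordinates.
    For every unordered pair of distinct landmarks we form the
    displacement vector, record its Euclidean magnitude, and the
    direction angles (alpha, beta, gamma) it makes with the x, y,
    and z axes. The final feature concatenates angles and magnitudes,
    mirroring the combined feature vector described in the abstract.
    """
    landmarks = np.asarray(landmarks, dtype=float)
    n = landmarks.shape[0]
    # Upper-triangular index pairs (i < j) for all landmark pairs.
    idx_i, idx_j = np.triu_indices(n, k=1)
    vecs = landmarks[idx_j] - landmarks[idx_i]           # shape (P, 3)
    mags = np.linalg.norm(vecs, axis=1)                  # shape (P,)
    # Direction cosines: each component divided by the vector length.
    with np.errstate(invalid="ignore", divide="ignore"):
        cosines = vecs / mags[:, None]
    angles = np.arccos(np.clip(cosines, -1.0, 1.0))      # shape (P, 3)
    # Concatenated angle + magnitude feature (the "combined" model).
    return np.concatenate([angles.ravel(), mags])

# Toy example: three landmarks at unit offsets along the axes.
pts = [[0, 0, 0], [1, 0, 0], [0, 0, 1]]
feat = axes_angle_magnitude_features(pts)
```

With three landmarks there are three pairs, giving nine direction angles plus three magnitudes, i.e. a 12-dimensional feature; such vectors would then feed the filter-based feature selection and multi-class SVM stages.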
