To address the variability in the number and positions of feature points extracted by the traditional scale-invariant feature transform (SIFT) method, an improved SIFT algorithm is proposed for robust emotion recognition. Specifically, shape decomposition is first performed on the detected facial images by defining a weight vector. Then, a feature point constraint algorithm is developed to determine the optimal positions of the feature points so that they effectively represent the regions of expression change. On this basis, SIFT descriptors are applied to extract regional gradient information as feature parameters. Finally, principal component analysis is used to reduce the feature dimensionality, and a support vector machine classifier performs facial expression recognition. Experiments were performed with 15 participants under varied conditions, i.e., different illuminations, face poses, and facial moisture levels. For the frontal and 5-degree-rotated face views, the average recognition accuracies are 98.52% and 94.47% with no additional light sources, and 96.97% and 95.40% with two additional light sources, respectively. In addition, complementing the illumination-variation tests, the average recognition rates are 96.23% and 96.20% under dry and wet face conditions, respectively. The experimental results demonstrate the robustness of the proposed method in facial expression recognition.
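The final stage of the pipeline, projecting descriptor features to a lower-dimensional space with PCA and then classifying, can be sketched as below. This is a minimal illustration, not the paper's implementation: synthetic random vectors stand in for the SIFT descriptor features, PCA is computed directly via SVD, and a simple nearest-centroid rule replaces the SVM classifier for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-face SIFT descriptor features:
# three "expression" classes, 20 samples each, 128 dimensions,
# with class means shifted so the classes are separable.
n_per_class, dim, n_classes = 20, 128, 3
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

def pca_fit_transform(X, k):
    """Center the data and project onto the top-k principal components (via SVD)."""
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, mean, Vt[:k]

# Reduce 128-dim features to 10 dimensions.
Z, mean, components = pca_fit_transform(X, k=10)

# Nearest-centroid classification in the reduced space
# (a lightweight stand-in for the paper's SVM classifier).
centroids = np.stack([Z[y == c].mean(axis=0) for c in range(n_classes)])
pred = np.argmin(np.linalg.norm(Z[:, None, :] - centroids[None, :, :], axis=2),
                 axis=1)
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In practice, the reduced features and labels would be fed to an SVM (e.g. with a radial-basis kernel), and accuracy would be measured on held-out test faces rather than the training set.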