Abstract

Obtaining a robust facial expression recognition (FER) method remains a research hotspot in the artificial intelligence field. In this paper, we propose a multiparameter fusion feature space with decision-voting-based classification for facial expression recognition. First, the parameters of the fusion feature space are determined according to the cross-validation recognition accuracy of the Multiscale Block Local Binary Pattern Uniform Histogram (MB-LBPUH) descriptor filtered over the training samples. Using these parameters, we build various fusion feature spaces by employing multiclass linear discriminant analysis (LDA). In these spaces, fusion features composed of MB-LBPUH and Histogram of Oriented Gradient (HOG) features are used to represent different facial expressions. Finally, to handle the hard-to-classify patterns caused by similar expression classes, a nearest neighbor-based decision voting strategy is designed to predict the classification results. In experiments with the JAFFE, CK+, and TFEID datasets, the proposed model clearly outperformed existing algorithms.
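As a rough illustration of the MB-LBPUH descriptor named in the abstract, the sketch below computes a multiscale-block LBP uniform histogram in pure NumPy: the image is average-pooled into blocks, an 8-neighbor LBP code is computed on the block means, and codes are binned into the 58 uniform patterns plus one catch-all bin. This is a minimal reconstruction under stated assumptions (block size, neighbor ordering, and normalization are illustrative choices), not the authors' exact implementation.

```python
import numpy as np

def uniform_lookup(p=8):
    """Map each 8-bit LBP code to a uniform-pattern bin.

    A code is "uniform" if its circular bit string has at most two
    0/1 transitions; the 58 uniform codes get their own bins and all
    non-uniform codes share one final bin (59 bins total for p=8).
    """
    table = np.full(2 ** p, -1, dtype=int)
    next_bin = 0
    for code in range(2 ** p):
        bits = [(code >> i) & 1 for i in range(p)]
        transitions = sum(bits[i] != bits[(i + 1) % p] for i in range(p))
        if transitions <= 2:
            table[code] = next_bin
            next_bin += 1
    table[table == -1] = next_bin  # all non-uniform codes -> last bin
    return table, next_bin + 1

def mb_lbp_uniform_hist(img, block=3):
    """MB-LBP uniform histogram: LBP over block means, then binning."""
    h, w = img.shape
    h, w = h - h % block, w - w % block
    # average-pool the image into non-overlapping block x block cells
    pooled = img[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    c = pooled[1:-1, 1:-1]  # each interior block is compared to its 8 neighbors
    neigh = [pooled[:-2, :-2], pooled[:-2, 1:-1], pooled[:-2, 2:], pooled[1:-1, 2:],
             pooled[2:, 2:], pooled[2:, 1:-1], pooled[2:, :-2], pooled[1:-1, :-2]]
    code = np.zeros_like(c, dtype=int)
    for i, n in enumerate(neigh):
        code += (n >= c).astype(int) << i
    table, n_bins = uniform_lookup()
    hist = np.bincount(table[code.ravel()], minlength=n_bins).astype(float)
    return hist / hist.sum()  # L1-normalized 59-bin descriptor

# Usage: a 30x30 grayscale patch yields a 59-dimensional normalized histogram.
descriptor = mb_lbp_uniform_hist(np.random.rand(30, 30), block=3)
```

In the paper's pipeline, histograms like this would be concatenated with HOG features before LDA projection; varying `block` gives the "multiscale" part whose best value is picked by cross-validation.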

Highlights

  • Facial expressions, as a form of nonverbal communication, convey social information among humans and are regarded as an emotional measurement that can be used to understand human actions and behaviors [1]

  • Most research work on facial expression recognition (FER) has aimed at achieving high expression recognition (ER) accuracy

  • Before feature extraction, the images were only cropped and resized, without any other image preprocessing. The highlight of the paper is the MB-LBPUH parameter selection



Introduction

Facial expressions, as a form of nonverbal communication, convey social information among humans and are regarded as an emotional measurement that can be used to understand human actions and behaviors [1]. In the computer vision field, the recognition of static-based and dynamic-based facial expressions is widely used in various applications, such as e-learning [2], driver drowsiness estimation [3], and pain assessment [4]. Facial expression recognition (FER) has four crucial steps: face detection, face image preprocessing, facial feature extraction, and classification [5]. The well-known Facial Action Coding System (FACS) was first proposed by Ekman and Friesen [6]. FACS is a facial expression coding system that postulates six primary emotions composed of sets of facial muscle action units (AUs); each expression is represented by a particular combination of specific AUs. However, the unit modules are complex, and the facial expression features are selected with some degree of manual intervention. Automatic feature point location and feature extraction methods have followed.

