Abstract

Recently, there has been a huge demand for assistive technology in industrial, commercial, automotive, and societal applications. Some of these applications require an efficient and accurate system for automatic facial expression recognition (FER), and FER has therefore attracted considerable interest among computer vision researchers. Although a plethora of work is available in the literature, automatic FER systems have not yet reached the desired level of robustness and performance. Most of this work is dominated by appearance-based methods, primarily the local binary pattern (LBP), local directional pattern (LDP), local ternary pattern (LTP), gradient local ternary pattern (GLTP), and improved gradient local ternary pattern (IGLTP). Given the popularity of appearance-based methods, in this paper we propose an appearance-based descriptor, the Improved Adaptive Local Ternary Pattern (IALTP), for automatic FER. The new descriptor is an improved version of ALTP, which has proven effective in face recognition. We investigate ALTP in more detail and propose improvements such as the use of uniform patterns and dimensionality reduction via principal component analysis (PCA). The reduced features are then classified with a kernel extreme learning machine (K-ELM) classifier. To validate the proposed method, experiments were conducted on three FER datasets using well-known evaluation measures: accuracy, precision, recall, and F1-score. The proposed approach is also compared with state-of-the-art works in the literature and is found to be more accurate and efficient.
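To illustrate the overall pipeline described above (ternary-pattern features, PCA reduction, kernel-based classification), the following is a minimal Python sketch, not the authors' exact IALTP. It assumes a simplified adaptive threshold proportional to the centre pixel intensity, omits the uniform-pattern mapping, and substitutes an RBF-kernel SVM for the K-ELM classifier; the function name ltp_histograms and all parameter values are illustrative only.

```python
# Illustrative LTP-style FER pipeline sketch (not the paper's exact IALTP).
# Assumptions (not from the paper): threshold = alpha * centre intensity,
# plain 8-neighbour codes without uniform-pattern mapping, and an RBF-kernel
# SVM standing in for the kernel extreme learning machine (K-ELM).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

def ltp_histograms(img, alpha=0.1, grid=(4, 4)):
    """Block-wise histograms of simplified adaptive local ternary patterns."""
    img = img.astype(np.float32)
    h, w = img.shape
    # 8-neighbour offsets (clockwise from top-left)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:-1, 1:-1]
    t = alpha * center                       # assumed adaptive threshold
    upper = np.zeros_like(center, dtype=np.int32)   # "+1" ternary codes
    lower = np.zeros_like(center, dtype=np.int32)   # "-1" ternary codes
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        upper += (nb > center + t).astype(np.int32) << k
        lower += (nb < center - t).astype(np.int32) << k
    feats = []
    gy, gx = grid
    for code_map in (upper, lower):          # two binary maps, per LTP convention
        for rows in np.array_split(code_map, gy, axis=0):
            for block in np.array_split(rows, gx, axis=1):
                hist, _ = np.histogram(block, bins=256, range=(0, 256))
                feats.append(hist / max(block.size, 1))
    return np.concatenate(feats)

# Usage with random stand-in data (replace with aligned face crops and labels).
rng = np.random.default_rng(0)
X = np.stack([ltp_histograms(rng.integers(0, 256, (64, 64))) for _ in range(40)])
y = rng.integers(0, 7, 40)                   # 7 basic expression classes
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf", C=10.0))
model.fit(X, y)
print("train accuracy:", model.score(X, y))
```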
