Abstract

Facial Expression Recognition (FER) is an important topic with applications in many areas. FER categorizes facial expressions according to human emotions. Many networks have been designed for facial emotion recognition, but they still suffer from problems such as performance degradation and low classification accuracy. To achieve greater classification accuracy, this paper proposes a new Leaky Rectified Triangle Linear Unit (LRTLU) activation function based on the Deep Convolutional Neural Network (DCNN). The input images are pre-processed using the new Adaptive Bilateral Filter Contourlet Transform (ABFCT) filtering algorithm. The face is then detected in the filtered image using the Chehra face detector. From the detected face image, facial landmarks are extracted using a cascaded regression tree, and important features are extracted based on the detected landmarks. The extracted feature set is then passed as input to the Leaky Rectified Triangle Linear Unit Activation Function Based Deep Convolutional Neural Network (LRTLU-DCNN), which classifies the input image into one of six emotions: happiness, sadness, neutrality, anger, disgust, and surprise. The proposed method is evaluated on the Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE) datasets, achieving a classification accuracy of 99.67347% on CK+ and 99.65986% on JAFFE.
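The abstract does not give the exact formula for the proposed LRTLU activation, so the sketch below only illustrates the general "leaky rectified" family of activations (in the spirit of Leaky ReLU) that the name suggests LRTLU extends; the function name and slope parameter are assumptions, not the paper's definition.

```python
import numpy as np

def leaky_rectified(x, alpha=0.01):
    """Generic leaky-rectified activation (hypothetical sketch, not the
    paper's LRTLU): positive inputs pass through unchanged, while negative
    inputs are scaled by a small slope alpha instead of being zeroed out,
    which keeps gradients flowing for negative pre-activations."""
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * x)

# Example: negative values are damped rather than clipped to zero.
print(leaky_rectified([-2.0, 0.0, 3.0]))  # → [-0.02  0.    3.  ]
```

Avoiding a hard zero for negative inputs is the usual motivation for leaky variants: it mitigates the "dying ReLU" problem, which is consistent with the abstract's goal of reducing performance degradation during training.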
