Abstract

Facial expression is an unspoken message essential to collaboration and effective discourse. A person's inner emotional state is conveyed through facial expressions, which makes them highly effective for communicating genuine emotion. Anger, happiness, sadness, contempt, surprise, fear, disgust, and neutral are eight common human expressions. The scientific community has proposed several facial emotion recognition techniques; however, because deep learning models are given too few facial landmarks and too little information about their intensity, facial expression recognition performance still needs improvement. This study proposes zoning-based facial expression recognition (ZFER), which locates additional facial landmarks to capture facial emotion intensity more deeply through zoning. After face extraction, landmarks such as the eyes, eyebrows, nose, forehead, and mouth are extracted. In the second step, each landmark region is divided into four zones, and the zone-based landmarks are passed to a VGG-16 model to generate a feature map. Finally, the feature map is fed to a fully connected neural network (FCNN) that classifies facial emotions into multiple classes. Experiments were performed on the FER2013 and CK+ datasets to compare the proposed model against state-of-the-art facial expression recognition approaches using performance metrics such as accuracy. The proposed method achieves 98.4% accuracy on CK+ and 65% on FER2013 using face features. With zoning, accuracy on the CK+ dataset improves from 98.47% to 98.74%.
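The abstract does not give implementation details of the zoning step, but the described idea of dividing each landmark region into four zones can be illustrated with a minimal sketch. Here we assume (hypothetically) that a landmark patch is a 2-D grayscale array and that "four zones" means four equal quadrants; the function name `zone_landmark` and the quadrant layout are illustrative assumptions, not the authors' published code.

```python
import numpy as np

def zone_landmark(patch: np.ndarray) -> list[np.ndarray]:
    """Split a 2-D landmark patch into four equal quadrant zones.

    Assumption: 'zoning' here means top-left, top-right,
    bottom-left, bottom-right quadrants of the patch, each of
    which would then be fed to the feature extractor (e.g. VGG-16).
    """
    h, w = patch.shape[0] // 2, patch.shape[1] // 2
    return [
        patch[:h, :w],  # top-left zone
        patch[:h, w:],  # top-right zone
        patch[h:, :w],  # bottom-left zone
        patch[h:, w:],  # bottom-right zone
    ]

# Example: a 4x4 mouth-region patch yields four 2x2 zones.
mouth_patch = np.arange(16, dtype=np.float32).reshape(4, 4)
zones = zone_landmark(mouth_patch)
```

In the full pipeline described by the abstract, each such zone (for every landmark: eyes, eyebrows, nose, forehead, mouth) would be resized and passed through VGG-16 to build the feature map before FCNN classification.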
