Abstract

Facial expressions reflect people’s feelings, emotions, and motives, motivating researchers to develop automatic facial expression recognition systems. Despite advances in deep learning frameworks for automatic facial expression recognition, model complexity, limited training samples, and subtle micro facial muscle movements keep facial expression recognition challenging. This research proposes a deep learning framework that uses fine-grained facial action unit detection to identify facial activity, behavior, and mood, and recognizes a person’s emotions from these individual patterns. The proposed facial expression recognition system involves pre-processing, feature representation and normalization, hyper-parameter tuning, and classification. Two different convolutional neural network models are introduced for feature learning and representation, followed by classification. Several advanced feature representation methods, such as image augmentation, matrix normalization, fine-tuning, and transfer learning, are applied to improve performance. The performance and efficiency of the proposed work are evaluated under different approaches. The proposed work has been tested on the standard Static Facial Expressions in the Wild (SFEW 1.0 and SFEW 2.0) and Indian Movie Face Database (IMFDB) benchmarks, achieving accuracies of 48.15%, 80.34%, and 64.17%, respectively. A quantitative comparison with existing state-of-the-art methods shows that the proposed model outperforms the competing methods.
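
The following is a minimal sketch, not the authors' implementation, of the general pipeline the abstract describes: a pretrained CNN backbone used for feature learning via transfer learning, image augmentation as a feature representation step, and a classification head with a subsequent fine-tuning pass. The backbone choice (MobileNetV2), class count, input size, and hyper-parameters are assumptions for illustration only.

```python
# Sketch of a transfer-learning CNN for facial expression classification.
# Assumptions: 7 expression classes, 224x224 RGB inputs, MobileNetV2 backbone.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7          # assumed: number of expression categories
IMG_SIZE = (224, 224)    # assumed input resolution

# Image augmentation (one of the feature-representation steps mentioned).
augment = models.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

# Pretrained backbone used as a feature extractor (transfer learning).
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
backbone.trainable = False  # freeze first; unfreeze later for fine-tuning

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)
x = backbone(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# ... train the classification head on the expression dataset here ...

# Fine-tuning pass: unfreeze the backbone and continue training
# with a much smaller learning rate.
backbone.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

In practice, the frozen-backbone stage lets the new classification head converge without disturbing the pretrained features, and the low learning rate in the fine-tuning stage prevents those features from being overwritten on a small expression dataset.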
