Abstract

In this paper, an approach for Facial Expression Recognition (FER) based on a multi-facial patches (MFP) aggregation network is proposed. Deep features are learned from facial patches using convolutional neural sub-networks and aggregated within one architecture for expression classification. In addition, a framework based on two data augmentation techniques is proposed to expand labeled FER training datasets. Consequently, the proposed shallow convolutional neural network (CNN) based approach does not need large datasets for training. The proposed framework is evaluated on three FER datasets. Results show that the proposed approach matches the performance of state-of-the-art deep learning FER approaches when the model is trained and tested on images from the same dataset. Moreover, the proposed data augmentation techniques improve the expression recognition rate, and thus can be a solution for training deep learning FER models using small datasets. However, accuracy degrades significantly when testing across datasets, revealing dataset bias. Fine-tuning can overcome the problem of transitioning from laboratory-controlled conditions to in-the-wild conditions. Finally, the emotional face is mapped using the MFP-CNN, and the contribution of the different facial areas to displaying emotion, as well as their importance in the recognition of each facial expression, is studied.
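To make the patch-aggregation idea concrete, the sketch below shows a minimal multi-patch CNN in PyTorch: one shallow sub-network per facial patch, with the per-patch features concatenated and fed to a classifier. The number of patches, patch size, layer widths, and class count are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a multi-facial-patch aggregation network (PyTorch).
# Assumptions: each face is pre-cropped into 4 grayscale 32x32 patches
# (e.g. eyes, nose, mouth) and there are 7 expression classes.
import torch
import torch.nn as nn


class PatchSubNet(nn.Module):
    """Shallow CNN that extracts a feature vector from a single facial patch."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.fc = nn.Linear(32 * 8 * 8, feat_dim)  # 32x32 patch -> 8x8 after two pools

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class MFPAggregationNet(nn.Module):
    """Aggregates per-patch features and classifies the facial expression."""
    def __init__(self, num_patches=4, num_classes=7, feat_dim=64):
        super().__init__()
        self.subnets = nn.ModuleList(
            [PatchSubNet(feat_dim) for _ in range(num_patches)]
        )
        self.classifier = nn.Linear(num_patches * feat_dim, num_classes)

    def forward(self, patches):  # patches: (batch, num_patches, 1, 32, 32)
        feats = [net(patches[:, i]) for i, net in enumerate(self.subnets)]
        return self.classifier(torch.cat(feats, dim=1))


# Usage: a batch of 8 faces, each split into 4 patches.
model = MFPAggregationNet()
logits = model(torch.randn(8, 4, 1, 32, 32))  # -> shape (8, 7)
```

Keeping each sub-network shallow is consistent with the abstract's claim that the approach does not require large training sets; the augmented data can be fed to this model with a standard cross-entropy objective.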
