Abstract

Deep learning techniques have recently drawn considerable attention toward automatic facial emotion recognition (FER) applications such as diagnosis of psychiatric disorders, mental health monitoring, and other human–computer interaction applications. However, the models developed so far still generalize poorly owing to the absence of substantial emotion datasets and to high operational costs. To address these issues, we propose a novel technique, facial emotion recognition using a fine-tuned MobileNetV2 architecture (FERFM), which uses transfer learning to improve the performance of FER systems on mobile devices. It transfers knowledge from a model pre-trained on the ImageNet dataset to the profile-view RGB-KDEF dataset to increase diversity among the learnt features. Initially, descriptive data analysis is performed on the input KDEF images to automatically derive the necessary knowledge. A pipeline strategy is then introduced in which the pre-trained MobileNetV2 architecture is fine-tuned by eliminating its last six layers and adding a dropout regularization layer, a max-pooling layer, and a dense layer. To evaluate the proposed work, we ran several tests on the KDEF-RGB dataset, the FER13 dataset, and real-time facial expression images, classifying the results into seven discrete emotion categories. The approach was also verified against a pre-trained VGG-16 model and demonstrated remarkable performance in comparison. The proposed FERFM achieves 85.7% accuracy with 1,510,599 trainable parameters and a running time of 43 ms per image, which demonstrates enhanced FER performance and outperforms existing state-of-the-art methods. The ability to achieve low operational costs while maintaining high accuracy also demonstrates its suitability for mobile applications.
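
As a rough illustration of the fine-tuning pipeline described above, the following Keras sketch truncates an ImageNet pre-trained MobileNetV2 and attaches a new head with max pooling, dropout, and a seven-way dense classifier. The input resolution, dropout rate, exact truncation index, which layers are frozen, and the optimizer settings are assumptions for illustration and are not specified in the abstract.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Load MobileNetV2 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)

# Approximate the pruning step: drop the final six layers of the backbone.
truncated = Model(inputs=base.input, outputs=base.layers[-7].output)

# Freeze the earlier layers so that only the tail of the backbone and the new
# head contribute trainable parameters (the exact split is an assumption).
for layer in truncated.layers[:-6]:
    layer.trainable = False

# New head: max pooling, dropout regularization, and a 7-class dense output.
x = layers.GlobalMaxPooling2D()(truncated.output)
x = layers.Dropout(0.5)(x)                            # dropout rate assumed
outputs = layers.Dense(7, activation="softmax")(x)    # seven emotion classes

model = Model(truncated.input, outputs)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```

In a setup like this, `model.fit` would be called on face crops resized to the chosen input resolution with one-hot labels for the seven emotion categories; the small trainable-parameter count follows from freezing most of the backbone.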
