Abstract

Detecting and analyzing emotions from human facial movement is a problem that has been defined and developed over many years because of the benefits it brings. Facial expression recognition is a cornerstone of the human-computer interaction (HCI) research area, and researchers are exploring its applications in security, medical science, and the study of individual and community behavior. In this paper, we propose a deep learning framework for facial expression recognition based on transfer learning. The approach takes the existing VGG16 model, already trained on ImageNet with its 1000 classes, and concatenates additional trainable layers on top of it to obtain a modified model. The model has then been evaluated on the Extended Cohn-Kanade (CK+) and Japanese Female Facial Expression (JAFFE) benchmark datasets, two popular facial expression datasets. The proposed model achieves 94.8% accuracy on CK+ and 93.7% on JAFFE, and is found to be superior to existing techniques. We implemented the proposed technique on a Google Colab GPU, which helped us process these datasets.
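To make the transfer-learning setup concrete, the following is a minimal sketch in Keras/TensorFlow of the kind of architecture the abstract describes: a frozen VGG16 base pre-trained on ImageNet with a new classification head concatenated on top. The head layer sizes, dropout rate, and the assumption of 7 expression classes are illustrative choices, not details taken from the paper.

# Minimal sketch of VGG16-based transfer learning for facial expression
# recognition. Head architecture and hyperparameters are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 7  # assumption: the basic expression categories in CK+/JAFFE

# Load VGG16 pre-trained on ImageNet, dropping its 1000-class classifier head.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base for transfer learning

# Concatenate new trainable layers on top of the frozen base.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Freezing the base reuses the ImageNet features while only the new head is trained, which keeps training feasible on small datasets such as CK+ and JAFFE and on free Colab GPU sessions.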
