Abstract

The paper proposes a new solution for recognizing a person's emotional state (joy, surprise, sadness, anger, disgust, fear, and a neutral state) from facial expressions. Alongside traditional verbal communication, emotions play a significant role in revealing true intentions during a communicative act in various domains. Many models and algorithms exist for classifying human emotions and applying them to support a communicative act, but the known models show low accuracy in recognizing emotional states. To classify facial expressions, two classifiers were built and implemented with the Keras library (ResNet50, MobileNet), and a new convolutional neural network classifier architecture was proposed. The classifiers were trained on the FER 2013 dataset. Comparison of the results for the chosen classifiers showed that the proposed model achieves the best validation accuracy (60.13%) and the smallest size (15.49 MB), with a training loss of 0.079 and a validation loss of 2.80. The research results can be used to recognize signs of stress and aggressive human behavior in public service systems and in areas that require communicating with a large number of people.
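To illustrate how a Keras baseline of this kind can be assembled, the following is a minimal sketch of a MobileNet-backed classifier for the seven FER 2013 emotion classes. The input size, optimizer, and head layers are illustrative assumptions, not the authors' exact architecture or hyperparameters.

```python
# Minimal sketch of a Keras emotion classifier with a MobileNet backbone.
# Assumptions: FER 2013's 48x48 grayscale faces are resized/converted to
# 96x96 RGB upstream; all hyperparameters here are illustrative.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # joy, surprise, sadness, anger, disgust, fear, neutral

def build_mobilenet_classifier(input_shape=(96, 96, 3)):
    # MobileNet backbone from Keras Applications, without the ImageNet head.
    backbone = tf.keras.applications.MobileNet(
        input_shape=input_shape, include_top=False, weights=None)
    model = models.Sequential([
        backbone,
        layers.GlobalAveragePooling2D(),   # pool feature maps to a vector
        layers.Dropout(0.5),               # regularization before the head
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_mobilenet_classifier()
model.summary()
```

A ResNet50 variant follows the same pattern by swapping `tf.keras.applications.MobileNet` for `tf.keras.applications.ResNet50`; the reported size and accuracy trade-offs come from comparing such backbones against the proposed custom architecture.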
