Abstract
Deep neural networks are a powerful model for feature extraction. They produce features that enable state-of-the-art performance on many tasks, including emotion categorization. However, their homogeneous representation of knowledge makes them prone to attacks, i.e., small modifications to training or test data that mislead the models. Emotion categorization is usually performed either in-distribution (train and test on the same dataset) or out-of-distribution (train on one or more datasets and test on a different dataset). We investigate whether our previously developed landmark-based technique, which improves robustness against attacks for in-distribution emotion categorization, translates to out-of-distribution classification problems. This matters because databases can differ, for example, in color or in the expressiveness of the displayed emotions. We compared the landmark-based method with four state-of-the-art deep models (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as emotion categorization tools (the Python Facial Expression Analysis Toolbox and the Microsoft Azure Face application programming interface), by performing a cross-database experiment across six commonly used databases: the extended Cohn–Kanade, Japanese female facial expression, Karolinska directed emotional faces, National Institute of Mental Health Child Emotional Faces Picture Set, real-world affective faces, and psychological image collection at Stirling databases. The landmark-based method achieved significantly higher accuracy, averaging 47.44%, compared with most of the deep networks (<36%) and the emotion categorization tools (<37%), and with considerably less execution time. This highlights that out-of-distribution emotion categorization, which requires detecting underlying emotional cues, is a much harder task than in-distribution categorization, where superficial patterns can be exploited to reach >97% accuracy.

Impact Statement—Recognising emotions from people's faces has real-world applications for computer-based perception, as it is often vital for interpersonal communication. Emotion recognition tasks are nowadays addressed using deep learning models that model colour distributions and therefore classify images rather than emotions. This homogeneous knowledge representation contrasts with emotion categorization, which we hypothesise to be more heterogeneous and landmark-based. We investigate this through out-of-distribution emotion categorization problems, where the test samples are drawn from a different dataset than the training images. Our landmark-based method achieves a significantly higher classification performance (on average) compared with four state-of-the-art deep networks (EfficientNetB0, InceptionV3, ResNet50, and VGG19), as well as other emotion categorization tools such as Py-Feat and the Azure Face API.
We conclude that this improved generalization is relevant for future developments of emotion categorization tools.
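As a rough illustration of the cross-database protocol described above, the sketch below (not the authors' implementation) holds out one database for testing and trains on the remaining five, repeating for every hold-out choice. The classifier, sample counts, feature dimensionality, and seven-class label set are assumptions chosen for the example; in the actual study the inputs would be the extracted facial-landmark features and each database's emotion labels.

```python
# Minimal sketch of an out-of-distribution (cross-database) evaluation loop.
# Placeholder random features stand in for landmark features; replace `data`
# with real per-database feature matrices and labels to run a genuine test.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
DATABASES = ["CK+", "JAFFE", "KDEF", "NIMH-ChEFS", "RAF-DB", "PICS-Stirling"]
N_FEATURES = 136   # e.g., 68 (x, y) landmark coordinates (assumed)
N_CLASSES = 7      # basic emotions plus neutral (assumed)

# Placeholder data: {database name: (features, labels)}
data = {
    name: (rng.normal(size=(200, N_FEATURES)),
           rng.integers(0, N_CLASSES, size=200))
    for name in DATABASES
}

for held_out in DATABASES:
    # Out-of-distribution split: train on every database except the held-out one.
    X_train = np.vstack([data[d][0] for d in DATABASES if d != held_out])
    y_train = np.concatenate([data[d][1] for d in DATABASES if d != held_out])
    X_test, y_test = data[held_out]

    clf = SVC(kernel="rbf").fit(X_train, y_train)   # stand-in classifier
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"test on {held_out:>14}: accuracy = {acc:.3f}")
```

With random placeholder data the accuracies sit at chance level; the point is only the train/test split, in which the test database never contributes samples to training.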