Abstract

Emotion categorization has become an important area of research due to the increasing number of intelligent systems that interact with humans, such as robots. These systems often rely on deep learning models, which have performed remarkably well on many classification tasks. However, owing to their homogeneous representation of knowledge, deep learning models are vulnerable to different kinds of attacks. Our hypothesis is that the emotions displayed in facial images are more than patterns of pixels. The objective of this work is therefore to propose a novel heterogeneous facial landmark-based emotion categorization (LEmo) method that is robust to distractor and adversarial attacks. Moreover, we compared the proposed LEmo method with seven state-of-the-art methods, including neural networks (i.e., the residual neural network (ResNet), Visual Geometry Group (VGG), and Inception-ResNet models), emotion categorization tools (i.e., Py-Feat and LightFace), and anti-attack-based methods (i.e., Adv-Network and DLP-CNN). To test the robustness of the LEmo method, three different types of adversarial attacks and a distractor attack were launched against the data. Unlike the other methods, which exhibited large performance decreases (up to 79%), the LEmo method was strongly resistant to all attacks, maintaining high accuracy with only a small (< 9.3%) or no decrease under the various changes made to the images of the CK+ and KDEF databases. Furthermore, the LEmo method achieved a considerably lower execution time than all the other methods.
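The abstract only names the landmark-based approach; as a rough illustration of the general idea of classifying geometric landmark features instead of raw pixels, the following minimal Python sketch extracts facial landmarks and trains a conventional classifier on them. The library choices (MediaPipe for landmark extraction, a scikit-learn SVM) and the simple normalization scheme are assumptions for illustration only, not the paper's actual LEmo design.

```python
# A minimal, hypothetical sketch of a landmark-based emotion classifier.
# This is NOT the authors' LEmo implementation; it only illustrates
# classifying geometric landmark features rather than raw pixels.
import cv2
import numpy as np
import mediapipe as mp
from sklearn.svm import SVC

_face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True,
                                             max_num_faces=1)

def landmark_features(bgr_image):
    """Return a flat vector of normalized (x, y) landmark coordinates,
    or None if no face is detected."""
    rgb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)
    result = _face_mesh.process(rgb)
    if not result.multi_face_landmarks:
        return None
    pts = np.array([(lm.x, lm.y)
                    for lm in result.multi_face_landmarks[0].landmark])
    # Center and scale the landmark cloud so the features depend on face
    # geometry, not position or size; small pixel-level perturbations
    # barely move these coordinates.
    pts -= pts.mean(axis=0)
    pts /= np.linalg.norm(pts)
    return pts.ravel()

def train(images, labels):
    """Fit an off-the-shelf SVM on landmark features.
    images: iterable of BGR face images; labels: integer emotion labels."""
    feats, kept = [], []
    for img, lab in zip(images, labels):
        f = landmark_features(img)
        if f is not None:
            feats.append(f)
            kept.append(lab)
    clf = SVC(kernel="rbf")
    clf.fit(np.array(feats), np.array(kept))
    return clf
```

Because the classifier never sees raw pixel intensities, pixel-space adversarial perturbations must be large enough to visibly displace the detected landmarks before they can change the prediction, which is one intuition behind the robustness the abstract reports.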
