Abstract

Facial expression recognition (FER) has a wide range of applications, including interactive gaming, healthcare, security, and human-computer interaction systems. Despite the impressive performance of deep learning-based FER, recognition remains challenging in real-world scenarios because of uncontrolled factors such as varying lighting conditions, face occlusion, and pose variations. Humans, in contrast, categorize objects using both their inherent characteristics and the surrounding environment, drawing on cognitive concepts such as cognitive relativity. Modeling cognitive relativity to learn cognitive features as feature augmentation may therefore improve the performance of deep learning models for FER. We propose a cognitive feature learning framework that learns cognitive features as complementary information for FER, consisting of a Relative Transformation module (AFRT) and a Graph Convolutional Network module (AFGCN). AFRT explicitly constructs cognitive relative features that reflect the positional relationships between samples, following human cognitive relativity, while AFGCN implicitly learns the interaction features between expressions as feature augmentation to improve classification performance. Extensive experiments on three public datasets demonstrate the universality and effectiveness of the proposed method.
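
The abstract does not give implementation details, but the two ideas can be illustrated with a minimal sketch. The snippet below is not the authors' code: `RelativeTransform` re-expresses each sample through its distances to a set of reference points (a stand-in for the relative-position features of AFRT), and `SimpleGCNLayer` propagates class embeddings over an assumed expression-interaction graph (a stand-in for AFGCN). All module names, shapes, and the adjacency construction are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch of relative-feature augmentation plus a GCN over an
# expression-interaction graph; not the paper's actual AFRT/AFGCN modules.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelativeTransform(nn.Module):
    """Augment features with their positions relative to reference samples."""

    def __init__(self, feat_dim: int, num_refs: int):
        super().__init__()
        # Learnable reference points standing in for "surrounding" samples.
        self.refs = nn.Parameter(torch.randn(num_refs, feat_dim))
        self.proj = nn.Linear(num_refs, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim); distances to each reference: (batch, num_refs)
        rel = torch.cdist(x, self.refs)
        # Project the relative distances back to feature space and fuse.
        return x + self.proj(rel)


class SimpleGCNLayer(nn.Module):
    """One graph-convolution step over a fixed class-interaction graph."""

    def __init__(self, in_dim: int, out_dim: int, adj: torch.Tensor):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        # Row-normalized adjacency (assumed given, e.g. from label co-occurrence).
        self.register_buffer("adj", adj / adj.sum(dim=1, keepdim=True))

    def forward(self, node_feats: torch.Tensor) -> torch.Tensor:
        # node_feats: (num_classes, in_dim) -> (num_classes, out_dim)
        return F.relu(self.linear(self.adj @ node_feats))


if __name__ == "__main__":
    feats = torch.randn(8, 128)                  # backbone features for 8 faces
    aug = RelativeTransform(128, num_refs=16)(feats)
    adj = torch.ones(7, 7)                       # 7 basic expressions, fully connected
    class_emb = SimpleGCNLayer(128, 128, adj)(torch.randn(7, 128))
    logits = aug @ class_emb.t()                 # classify via augmented features
    print(logits.shape)                          # torch.Size([8, 7])
```

In this sketch the augmented sample features are matched against graph-refined class embeddings, which is one common way to combine instance-level feature augmentation with class-level relational modeling; the paper may combine the two modules differently.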
