Abstract

Understanding human facial expressions is a key step towards natural human–computer interaction. Owing to the anatomical mechanism that governs facial muscle interactions, there are strong dependencies between facial expressions and action units (AUs); this domain knowledge can be exploited to guide model learning, yet it has not previously been represented explicitly and integrated into a network. In this study, we propose a novel method for recognizing facial expressions and AUs that models their dependencies with a graph convolutional network (GCN). First, we train a conditional generative adversarial network to filter out identity information and extract expression information through a de-expression learning procedure. Next, we apply a GCN to represent the dependencies among AU nodes, embedding each node by dividing the expression component into multiple patches corresponding to AU-related regions. Finally, we encode the dependencies between expressions and AUs as prior knowledge matrices and integrate them into the loss function to constrain the model. Our experimental results indicate that this representation effectively improves recognition rates and that our method outperforms several popular approaches.
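To make the GCN and prior-knowledge steps concrete, the sketch below shows, in PyTorch, one plausible way to propagate AU-patch features through a small GCN and to penalize AU predictions that contradict an expression-to-AU prior matrix. It is a minimal illustration under stated assumptions, not the authors' exact formulation: the names (AUGraphConv, AUExpressionHead, prior_consistency_loss), the dimensions, and the MSE form of the constraint are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AUGraphConv(nn.Module):
    """One graph-convolution layer over AU nodes: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, node_feats, adj_norm):
        # node_feats: (batch, num_aus, in_dim); adj_norm: (num_aus, num_aus)
        # adj_norm is a normalized AU-dependency adjacency matrix.
        return F.relu(adj_norm @ self.weight(node_feats))

class AUExpressionHead(nn.Module):
    """AU and expression classifiers on GCN-refined AU-patch features."""
    def __init__(self, num_aus=12, num_expressions=6, patch_dim=128, hidden=64):
        super().__init__()
        self.gc1 = AUGraphConv(patch_dim, hidden)
        self.gc2 = AUGraphConv(hidden, hidden)
        self.au_cls = nn.Linear(hidden, 1)                 # per-node AU presence
        self.expr_cls = nn.Linear(num_aus * hidden, num_expressions)

    def forward(self, patch_feats, adj_norm):
        # patch_feats: one feature vector per AU-related facial patch,
        # e.g. pooled from the de-expression network's expression component.
        h = self.gc2(self.gc1(patch_feats, adj_norm), adj_norm)
        au_logits = self.au_cls(h).squeeze(-1)             # (batch, num_aus)
        expr_logits = self.expr_cls(h.flatten(1))          # (batch, num_expressions)
        return au_logits, expr_logits

def prior_consistency_loss(au_probs, expr_probs, prior):
    # prior[k, j]: prior probability that AU j is active under expression k
    # (assumed given, e.g. from anatomical rules or annotation statistics).
    # Penalize disagreement between predicted AUs and the AUs implied by
    # the predicted expression distribution.
    implied_aus = expr_probs @ prior                       # (batch, num_aus)
    return F.mse_loss(au_probs, implied_aus)
```

In this sketch, adj_norm would be built from AU co-occurrence statistics and the consistency term would be added to the standard AU and expression classification losses; the specific normalization and weighting are design choices the paper itself would specify.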
