Abstract

Current research on facial action unit (AU) recognition typically requires fully AU-annotated facial images. Compared to facial expression labeling, AU annotation is a time-consuming, expensive, and error-prone process. Inspired by dual learning, we propose a novel weakly supervised dual learning mechanism to train facial action unit classifiers from expression-annotated images. Specifically, we consider AU recognition from facial images as the main task, and face synthesis given AUs as the auxiliary task. For AU recognition, we force the recognized AUs to satisfy both expression-dependent and expression-independent AU dependencies, i.e., the domain knowledge about expressions and AUs. For face synthesis given AUs, we minimize the difference between the synthetic face and the ground-truth face that exhibits the same AUs as those recognized and given. By optimizing the dual tasks simultaneously, we successfully leverage their intrinsic connections as well as domain knowledge about expressions and AUs to facilitate the learning of AU classifiers from expression-annotated images. Furthermore, we extend the proposed weakly supervised dual learning mechanism to semi-supervised dual learning scenarios with partially AU-annotated images. Experimental results on three benchmark databases demonstrate the effectiveness of the proposed approach for both tasks.
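To make the dual-task objective concrete, the sketch below illustrates one weakly supervised training step: an AU recognizer (main task) is coupled with a face synthesizer (auxiliary task), and the joint loss combines a reconstruction term with a penalty encoding expression-dependent AU knowledge. This is a minimal illustration under assumed components, not the authors' implementation; the module names (AURecognizer, FaceSynthesizer, dependency_penalty), network architectures, rule tables, and AU indices are all hypothetical, and the expression-independent dependencies and semi-supervised extension from the abstract are omitted for brevity.

```python
# Minimal PyTorch sketch of the dual-task training step (illustrative only).
import torch
import torch.nn as nn


class AURecognizer(nn.Module):
    """Main task: predict AU occurrence probabilities from a face image."""
    def __init__(self, num_aus=12):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_aus),
        )

    def forward(self, img):
        return torch.sigmoid(self.backbone(img))


class FaceSynthesizer(nn.Module):
    """Auxiliary (dual) task: synthesize a face image from an AU vector."""
    def __init__(self, num_aus=12, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.decoder = nn.Sequential(
            nn.Linear(num_aus, 256), nn.ReLU(),
            nn.Linear(256, 3 * img_size * img_size), nn.Sigmoid(),
        )

    def forward(self, aus):
        out = self.decoder(aus)
        return out.view(-1, 3, self.img_size, self.img_size)


def dependency_penalty(au_probs, expr_labels, pos_rules, neg_rules):
    """Hinge-style penalty encoding expression-dependent AU knowledge:
    AUs listed in pos_rules[e] should be likely under expression e,
    AUs listed in neg_rules[e] unlikely (rule tables are illustrative)."""
    loss = au_probs.new_zeros(())
    for i, e in enumerate(expr_labels.tolist()):
        for au in pos_rules.get(e, []):
            loss = loss + torch.relu(0.5 - au_probs[i, au])
        for au in neg_rules.get(e, []):
            loss = loss + torch.relu(au_probs[i, au] - 0.5)
    return loss / len(expr_labels)


# One weakly supervised step on expression-annotated (AU-unlabeled) images.
recognizer, synthesizer = AURecognizer(), FaceSynthesizer()
opt = torch.optim.Adam(
    list(recognizer.parameters()) + list(synthesizer.parameters()), lr=1e-4
)

images = torch.rand(8, 3, 64, 64)        # face images, no AU labels
expressions = torch.randint(0, 6, (8,))  # expression labels only
pos_rules = {3: [5, 11]}                 # e.g. one expression implies two AUs (indices are made up)
neg_rules = {3: [0]}

au_probs = recognizer(images)            # main task: recognize AUs
recon = synthesizer(au_probs)            # dual task: synthesize face from recognized AUs
loss = nn.functional.mse_loss(recon, images) \
    + dependency_penalty(au_probs, expressions, pos_rules, neg_rules)
opt.zero_grad()
loss.backward()
opt.step()
```

Because the reconstruction loss backpropagates through the recognized AUs into the classifier, the auxiliary synthesis task supervises AU recognition without AU labels, which is the intuition behind the dual learning mechanism described above.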
