Abstract

Facial Action Unit (AU) recognition is a challenging problem because subtle muscle movements produce diverse AU representations. Recently, AU relations have been exploited to assist AU recognition and to improve the understanding of AUs. Nevertheless, simply adopting regular Bayesian networks or the relations between AUs and emotions is not sufficient for modelling complex AU relations. To provide a quantitative measurement of AU relations using knowledge from the Facial Action Coding System (FACS), we propose an AU relation quantization autoencoder. Moreover, to cope with the diversity of AUs arising from individual representation differences and other environmental factors, we propose a dual-channel graph convolutional neural network (DGCN) that captures both inherent and random AU relations. The first channel is a FACS-based relation graph convolution channel (FACS-GCN) that embeds prior knowledge from FACS and adapts the network to the inherent dependencies among AUs. The second channel is a data-learning-based relation graph convolution channel (DLR-GCN) built on metric learning, which provides robustness to individual differences and environmental changes. Comprehensive experiments were conducted on three public datasets: CK+, RAF-AU, and DISFA. The results demonstrate that the proposed DGCN extracts the hidden relations well and thereby achieves strong performance in AU recognition.
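To make the dual-channel idea concrete, the sketch below shows one plausible PyTorch realization, not the authors' implementation: one graph convolution channel propagates over a fixed FACS-derived adjacency (standing in for FACS-GCN), while a second channel builds its adjacency from learned pairwise node similarities, a common metric-learning surrogate (standing in for DLR-GCN). The class name `DualChannelGCN`, the `prior_adj` input, and the additive fusion of the two channels are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualChannelGCN(nn.Module):
    """Hypothetical sketch of a dual-channel graph convolution over AU nodes.

    `prior_adj` stands in for a FACS-derived AU relation matrix (the
    inherent-relation channel); the second channel derives a data-driven
    adjacency from learned node similarities. Illustrative only.
    """

    def __init__(self, num_aus: int, feat_dim: int, prior_adj: torch.Tensor):
        super().__init__()
        self.register_buffer("prior_adj", prior_adj)  # fixed (N, N) FACS prior
        self.w_facs = nn.Linear(feat_dim, feat_dim)   # FACS-channel weights
        self.w_dlr = nn.Linear(feat_dim, feat_dim)    # data-learned-channel weights
        self.proj = nn.Linear(feat_dim, feat_dim)     # embedding for similarity
        self.classifier = nn.Linear(feat_dim, 1)      # per-AU occurrence score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_aus, feat_dim) per-AU visual features.
        # Channel 1: propagate along the fixed FACS relation graph.
        h_facs = F.relu(self.w_facs(self.prior_adj @ x))

        # Channel 2: data-driven adjacency from pairwise similarity of
        # projected node embeddings (metric-learning style).
        z = self.proj(x)                                    # (B, N, D)
        sim = torch.softmax(z @ z.transpose(1, 2), dim=-1)  # (B, N, N)
        h_dlr = F.relu(self.w_dlr(sim @ x))

        # Fuse the inherent and data-learned relation channels.
        h = h_facs + h_dlr
        return self.classifier(h).squeeze(-1)  # (B, N) AU logits


# Example usage with 12 AU nodes and 64-d features (shapes illustrative):
prior = torch.eye(12)  # placeholder for a real FACS co-occurrence prior
model = DualChannelGCN(num_aus=12, feat_dim=64, prior_adj=prior)
logits = model(torch.randn(2, 12, 64))  # -> shape (2, 12)
```

The additive fusion is only one possible design choice; concatenation or gated combination of the two channels would fit the same dual-channel description.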
