Abstract

Automatic Facial Action Unit (AU) detection has drawn increasing attention in recent years due to its significance for facial expression analysis. Frontal-view AU detection has been extensively evaluated, but cross-pose AU detection remains a largely untouched problem due to the scarcity of related datasets. The Facial Expression Recognition and Analysis challenge (FERA2017) recently released a large-scale video-based AU detection dataset spanning different facial poses. To address this challenging task, we develop a simple and efficient deep-learning-based system that detects AU occurrence under nine different facial views. In this system, we first crop out facial images using morphology operations, including binary segmentation, connected-component labeling, and region-boundary extraction. Then, for each type of AU, we train a corresponding expert network by fine-tuning the VGG-Face network on cross-view facial images, so as to extract more discriminative features for the subsequent binary classification. In the AU detection sub-challenge, our proposed method achieves a mean accuracy of 77.8% (vs. the baseline of 56.1%) and improves the F1 score to 57.4% (vs. the baseline of 45.2%).
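The face-cropping step described above (binary segmentation, connected-component labeling, region-boundary extraction) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the thresholding rule, the choice of the largest component as the face region, and the function name `crop_face` are all assumptions for the sake of the example.

```python
import numpy as np
from scipy import ndimage

def crop_face(gray, thresh=0.5):
    """Crop the largest foreground region from a grayscale image
    (values in [0, 1]) using the three morphology operations named
    in the abstract."""
    # 1. Binary segmentation: a simple global threshold (assumed;
    #    the abstract does not specify the segmentation rule).
    binary = gray > thresh
    # 2. Connected-component labeling.
    labels, n = ndimage.label(binary)
    if n == 0:
        return gray  # nothing segmented; return the input unchanged
    # Keep the largest component, assumed here to be the face region.
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    largest = int(np.argmax(sizes)) + 1
    # 3. Region-boundary extraction: the bounding box of that component.
    box = ndimage.find_objects(labels == largest)[0]
    return gray[box]
```

The cropped patch would then be fed to the per-AU expert networks fine-tuned from VGG-Face.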
