Abstract

Automatic Facial Action Unit (AU) detection from videos has attracted increasing interest in recent years owing to its importance for analyzing facial expressions. Existing methods face challenges in localizing the sparse facial regions associated with different AUs, in modeling temporal dependencies, and in learning multiple AUs simultaneously. In this paper, we propose a novel deep neural network architecture for AU detection that addresses these challenges jointly. First, to capture region sparsity, we design a region pooling layer on top of a fully convolutional network to extract per-region features for each AU. Second, to integrate temporal dependencies, Long Short-Term Memory (LSTM) units are stacked on top of the regional features. Finally, the regional features and LSTM outputs are combined to produce per-frame multi-label predictions. Experimental results on three large spontaneous AU datasets (BP4D, GFT, and DISFA) demonstrate that our method outperforms state-of-the-art approaches. Our method achieves the highest average F1 and AUC scores on all three datasets, with average F1 improvements of 4.8% on BP4D, 12.7% on GFT, and 14.3% on DISFA, and average AUC improvements of 27.4% on BP4D and 33.5% on DISFA.
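To make the pipeline described above concrete, the sketch below shows one way its three stages could be wired together in PyTorch. It is a minimal sketch, not the authors' implementation: the class name RegionalAUNet, all layer sizes, and the use of a fixed 3x4 grid pool as a stand-in for AU-specific region pooling are illustrative assumptions.

```python
import torch
import torch.nn as nn


class RegionalAUNet(nn.Module):
    """Sketch of the described pipeline: FCN backbone -> region pooling ->
    LSTM over regional features -> fused per-frame multi-label AU head.
    Sizes and the region scheme are assumptions, not the paper's design."""

    def __init__(self, num_aus=12, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Fully convolutional backbone producing a spatial feature map.
        self.fcn = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Stand-in for the region pooling layer: pool the map into a
        # 3x4 grid so each of the 12 cells plays the role of one
        # AU-specific region (assumes num_aus == 12).
        self.region_pool = nn.AdaptiveAvgPool2d((3, 4))
        # LSTM stacked on top of the concatenated regional features.
        self.lstm = nn.LSTM(num_aus * feat_dim, hidden_dim, batch_first=True)
        # Fusion head: regional features and LSTM outputs together
        # yield per-frame multi-label predictions.
        self.classifier = nn.Linear(num_aus * feat_dim + hidden_dim, num_aus)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        x = frames.reshape(b * t, *frames.shape[2:])
        fmap = self.fcn(x)                            # (b*t, C, H, W)
        regions = self.region_pool(fmap).flatten(1)   # (b*t, 12*C)
        regions = regions.reshape(b, t, -1)           # (b, t, 12*C)
        temporal, _ = self.lstm(regions)              # (b, t, hidden)
        fused = torch.cat([regions, temporal], dim=-1)
        return torch.sigmoid(self.classifier(fused))  # per-frame AU probs
```

Such a model would typically be trained with a per-AU binary cross-entropy loss, since each frame can activate several AUs at once.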
