Abstract

The inherent dependencies among facial action units (AUs), which arise from the underlying anatomical mechanism, are essential for proper AU recognition and intensity estimation, but they have not been exploited to their full potential. We propose novel methods that recognize AUs and estimate their intensities via hybrid Bayesian networks. The upper two layers of the proposed models are latent regression Bayesian networks (LRBNs), and the lower layers are Bayesian networks (BNs). The visible nodes of the LRBN layers represent ground-truth AU occurrences or AU intensities. Through the directed connections from its latent layer to its visible layer, an LRBN can successfully represent the relationships among multiple AUs or AU intensities. The lower layers consist of two-node Bayesian networks for AU recognition and three-node Bayesian networks for AU intensity estimation; these bottom layers combine measurements extracted from facial images with the AU dependencies for both tasks. Efficient learning algorithms for the hybrid Bayesian networks are proposed for AU recognition as well as intensity estimation. Furthermore, because AU relationships are closely related to facial expressions, the proposed hybrid Bayesian network models are extended to facial expression-assisted AU recognition and intensity estimation. We evaluate our methods on three benchmark databases for AU recognition and two benchmark databases for intensity estimation. The results demonstrate that the proposed approaches faithfully model the complex and global inherent AU dependencies, and that expression labels available only during training can boost the estimation of AU dependencies for both AU recognition and intensity estimation.
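
The abstract describes, at a high level, a model that couples an LRBN-style prior over joint AU configurations with per-AU measurement nodes. The toy sketch below illustrates that general idea only; the sizes, parameter values, Gaussian measurement model, and enumeration-based MAP inference are all illustrative assumptions, not the paper's actual algorithms or parameters.

```python
# Toy sketch (not the authors' code): an LRBN-like prior over binary AU labels z,
# driven by binary latent units h, combined with two-node measurement BNs per AU.
# All names, sizes, and distributions here are assumptions for illustration.
import itertools
import numpy as np

rng = np.random.default_rng(0)

N_AUS, N_LATENT = 4, 3          # kept tiny so exact enumeration is feasible

# LRBN parameters: directed links from latent units h to visible AU labels z,
# with p(z_i = 1 | h) = sigmoid(W[i] @ h + b[i]); latent units have Bernoulli priors.
W = rng.normal(scale=1.0, size=(N_AUS, N_LATENT))
b = rng.normal(scale=0.5, size=N_AUS)
prior_h = np.full(N_LATENT, 0.5)

# Measurement BNs (two nodes per AU: label -> measurement), here Gaussian likelihoods.
mu = np.array([[0.0, 1.0]] * N_AUS)   # measurement mean given z_i = 0 or z_i = 1
sigma = 0.6


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def lrbn_prior(z):
    """p(z) under the toy LRBN, marginalizing the latent layer by enumeration."""
    total = 0.0
    for h in itertools.product([0, 1], repeat=N_LATENT):
        h = np.asarray(h, dtype=float)
        p_h = np.prod(np.where(h == 1, prior_h, 1.0 - prior_h))
        p_z_given_h = sigmoid(W @ h + b)
        total += p_h * np.prod(np.where(z == 1, p_z_given_h, 1.0 - p_z_given_h))
    return total


def measurement_likelihood(m, z):
    """prod_i p(m_i | z_i) with Gaussian measurement nodes."""
    means = mu[np.arange(N_AUS), z]
    return np.prod(np.exp(-0.5 * ((m - means) / sigma) ** 2))


def map_au_configuration(m):
    """MAP estimate of the joint AU configuration given image measurements m."""
    best_z, best_score = None, -np.inf
    for z in itertools.product([0, 1], repeat=N_AUS):
        z = np.asarray(z)
        score = lrbn_prior(z) * measurement_likelihood(m, z)
        if score > best_score:
            best_z, best_score = z, score
    return best_z


# Example: noisy measurements suggesting AUs 0 and 2 are active.
m = np.array([0.9, 0.1, 1.1, -0.2])
print(map_au_configuration(m))
```

The point of the sketch is the factorization: the LRBN supplies a joint prior over AU configurations (capturing their dependencies), while the lower-layer measurement nodes tie each AU to image evidence, and inference combines the two.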
