Abstract

It is challenging to track multiple facial features simultaneously when rich expressions are presented on a face. We propose a two-step solution. In the first step, several independent condensation-style particle filters are utilized to track each facial feature in the temporal domain. Particle filters are very effective for visual tracking problems; however, multiple independent trackers ignore the spatial constraints and the natural relationships among facial features. In the second step, we use Bayesian inference—belief propagation—to infer each facial feature's contour in the spatial domain, where the relationships among the contours of facial features are learned beforehand from a large facial expression database. The experimental results show that our algorithm can robustly track multiple facial features simultaneously, even in the presence of large interframe motions and expression changes.

Highlights

  • Multiple facial feature tracking is very important in the computer vision field: it needs to be carried out before video-based facial expression analysis and expression cloning

  • Equation (8) means that the product is effectively the posterior probability of xti conditioned on yti and {Yti−1}, and this shares the same idea as the condensation algorithm. This property is important because it allows us to first run the particle filter to track each facial feature in one time step; the output of the particle filter then fits naturally into a loopy belief propagation process (see (6) and (7))

  • We extend the particle filter from the relatively simple Markov chain to a directed-cum-undirected graphical model applied to the multiple facial feature tracking problem
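The first step above—running a condensation-style particle filter for each facial feature independently—can be sketched as below. This is a minimal illustrative toy (1-D state, random-walk motion model, Gaussian observation likelihood), not the paper's exact dynamics or likelihood; the function and variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def condensation_step(particles, weights, observe, motion_std=2.0):
    """One condensation iteration: select (resample), predict, measure."""
    n = len(particles)
    # Select: resample particles in proportion to their current weights
    idx = rng.choice(n, size=n, p=weights)
    particles = particles[idx]
    # Predict: propagate each particle through a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Measure: re-weight by the observation likelihood p(y_t | x_t)
    weights = np.array([observe(p) for p in particles])
    weights /= weights.sum()
    return particles, weights

# Toy example: the feature's true position is 10.0, observed through
# a Gaussian likelihood (a stand-in for a real contour/appearance model)
true_pos = 10.0
observe = lambda x: np.exp(-0.5 * ((x - true_pos) / 1.5) ** 2)

particles = rng.uniform(0.0, 20.0, size=200)
weights = np.ones(200) / 200
for _ in range(10):
    particles, weights = condensation_step(particles, weights, observe)

estimate = float(np.sum(weights * particles))  # posterior mean of the state
```

The weighted particle set approximates the per-feature posterior for one time step, which is exactly the quantity that the paper then passes into the spatial belief propagation stage.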


Summary

INTRODUCTION

Multiple facial feature tracking is very important in the computer vision field: it needs to be carried out before video-based facial expression analysis and expression cloning. Particle filters are often very effective for visual tracking problems, but they are specialized to temporal problems whose corresponding graphs are simple Markov chains (see Figure 1). The contribution of this paper is extending particle filters to track multiple facial features simultaneously. The straightforward approach of tracking each facial feature with one independent particle filter is questionable, because influences and interactions among facial features are not taken into account. We propose a spatio-temporal graphical model for multiple facial feature tracking (see Figure 2). Nonparametric belief propagation is used to infer the facial features' interrelationships in a part-based face model, allowing the positions and states of features lost in clutter to be recovered. Every facial feature forms a Markov chain (see Figure 1).
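The second step—using belief propagation in the part-based face model so that a well-observed feature can correct an ambiguous one—can be sketched with a discrete sum-product message update. This is a simplified assumption-laden toy (two features on a shared 1-D candidate grid, a Gaussian pairwise compatibility around a hypothetical learned offset), not the paper's nonparametric formulation:

```python
import numpy as np

positions = np.array([0.0, 5.0, 10.0, 15.0])  # candidate locations for each feature
expected_offset = 5.0  # hypothetical learned spatial relation between the two features

# Local evidence phi_i(x_i): feature A is observed clearly near 5.0,
# while feature B is occluded, so its evidence is uninformative
phi_a = np.array([0.05, 0.85, 0.05, 0.05])
phi_b = np.array([0.25, 0.25, 0.25, 0.25])

# Pairwise compatibility psi(x_a, x_b): Gaussian around the learned offset
diff = positions[None, :] - positions[:, None]
psi = np.exp(-0.5 * ((diff - expected_offset) / 2.0) ** 2)

# Sum-product message from A to B, then B's belief
msg_a_to_b = phi_a @ psi          # m_{a->b}(x_b) = sum_{x_a} phi_a(x_a) psi(x_a, x_b)
belief_b = phi_b * msg_a_to_b
belief_b /= belief_b.sum()

best_b = positions[int(np.argmax(belief_b))]  # B recovered at offset from A
```

Even though feature B's own evidence is flat, the message from feature A concentrates B's belief at the location consistent with the learned spatial relation, which is the mechanism by which features in clutter are recovered.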

RELATED WORK
MULTIPLE FACIAL FEATURE TRACKING BY PARTICLE FILTER
Why several particle filters?
Loopy belief propagation
Particle filter itself is not enough
Belief propagation in spatio-temporal graphical model
Particle propagation in spatio-temporal graphical model
Learning the correlation function
Optimizing Bayesian inference for Markov network
EXPERIMENTAL RESULTS
CONCLUSIONS
