Abstract

Facial appearance changes in video sequences pose a nonstationary data problem, owing to factors such as variation in pose, illumination, and facial expression. Most algorithms that employ a fixed appearance model of the target object are therefore not robust enough to track objects in uncontrolled environments. Existing Adaptive Appearance Model (AAM) approaches address this problem to an extent; however, they do not adequately track facial feature points, such as those around the eyes or mouth, in the presence of significant expression changes. In this paper, we propose a method that combines online and offline learning for robust tracking of facial feature points. Our method first estimates the facial feature points globally with a stochastic approach, which allows the tracker to escape from local minima. We then refine the feature points with a deterministic approach. The tracked results are filtered by an offline learning approach to ensure the rejection of poorly aligned targets. This allows the proposed tracker to significantly improve robustness against appearance changes and occlusions. Experimental results on tracking facial feature points in long video sequences with a wide range of facial expressions and head movements demonstrate the effectiveness and robustness of our tracker.
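The abstract only outlines the pipeline, so the following is a minimal illustrative sketch of the three stages it describes: a stochastic global search that perturbs the previous estimate to escape local minima, a deterministic gradient-descent-style refinement, and an offline-trained check that rejects poorly aligned results. The paper's actual sampler, refinement rule, appearance cost, and offline classifier are not given here, so every function below (appearance_cost, stochastic_global_search, offline_alignment_check, and so on) is a hypothetical stand-in, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def appearance_cost(points, frame):
    # Placeholder cost: squared distance of the points to a fixed reference
    # shape. In the paper this would measure appearance mismatch between the
    # current frame and the model around each feature point.
    reference = np.zeros_like(points)
    return float(np.sum((points - reference) ** 2))

def stochastic_global_search(frame, prev_points, n_samples=200, sigma=3.0):
    # Sample random perturbations of the previous estimate and keep the best;
    # the randomness lets the search jump out of local minima of the cost.
    best_points = prev_points
    best_cost = appearance_cost(prev_points, frame)
    for _ in range(n_samples):
        candidate = prev_points + rng.normal(0.0, sigma, prev_points.shape)
        cost = appearance_cost(candidate, frame)
        if cost < best_cost:
            best_points, best_cost = candidate, cost
    return best_points

def deterministic_refinement(frame, points, steps=20, lr=0.1, eps=1e-3):
    # Refine the global estimate with numerical gradient descent on the cost.
    points = points.copy()
    for _ in range(steps):
        base = appearance_cost(points, frame)
        grad = np.zeros_like(points)
        for idx in np.ndindex(points.shape):
            bumped = points.copy()
            bumped[idx] += eps
            grad[idx] = (appearance_cost(bumped, frame) - base) / eps
        points -= lr * grad
    return points

def offline_alignment_check(frame, points, threshold=50.0):
    # Stand-in for the offline-trained classifier that rejects poorly
    # aligned targets; here a simple cost threshold plays that role.
    return appearance_cost(points, frame) < threshold

def track(frames, init_points):
    points = init_points
    results = []
    for frame in frames:
        candidate = stochastic_global_search(frame, points)
        candidate = deterministic_refinement(frame, candidate)
        if offline_alignment_check(frame, candidate):
            points = candidate           # accept the refined estimate
        results.append(points.copy())    # else keep the previous points
    return results

# Toy usage: 5 blank frames, 4 feature points starting off the reference shape.
frames = [np.zeros((64, 64)) for _ in range(5)]
init = rng.normal(5.0, 1.0, (4, 2))
trajectory = track(frames, init)
print(trajectory[-1])
```

The accept-or-keep step in track mirrors the abstract's filtering idea: a rejected alignment falls back on the previous frame's points, which is what gives the tracker resilience to occlusions in this sketch.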

