Abstract

Supervised Descent Method (SDM) is one of the leading cascaded regression approaches for face alignment, with state-of-the-art performance and a solid theoretical basis. However, SDM is prone to local optima and tends to average conflicting descent directions, which makes it ineffective at covering a complex facial shape space induced by large head poses and rich non-rigid face deformations. In this paper, a novel two-step framework called multi-subspace SDM (MS-SDM) is proposed to equip SDM with a stronger capability for handling unconstrained faces. First, the optimization space is partitioned with respect to shape variation using k-means. The resulting subspaces are semantically meaningful, correlating strongly with head pose, and faces within a given subspace share compatible shape-appearance relationships. Then, Naive Bayes is applied to perform robust subspace prediction by considering the relative proximity of each subspace to the sample. This guarantees that each sample is allocated to the most appropriate subspace-specific regressor. The proposed method is validated on benchmark face datasets and in a mobile facial tracking implementation.
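The two-step framework described above can be sketched in a minimal, self-contained form. The sketch below uses random stand-in vectors in place of real face shapes and a plain k-means plus Gaussian Naive Bayes implementation; the dimensions, sample counts, and number of subspaces are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 200 "face shapes", each 68 landmarks -> 136-dim vectors.
# (Real MS-SDM would use annotated landmark coordinates; these are random.)
shapes = rng.normal(size=(200, 136))

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: partition the shape space into k subspaces."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each shape to its nearest center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute centers; keep the old center if a cluster went empty.
        centers = np.stack([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(shapes, k=3)

class GaussianNB:
    """Gaussian Naive Bayes: predicts a subspace by comparing the relative
    likelihood of the sample under each subspace's feature distribution."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.stack([X[y == c].mean(0) for c in self.classes])
        self.var = np.stack([X[y == c].var(0) + 1e-6 for c in self.classes])
        self.logprior = np.log(np.bincount(y) / len(y))
        return self

    def predict(self, X):
        # log p(c) + sum_d log N(x_d | mu_cd, var_cd), maximized over c
        ll = -0.5 * (np.log(2 * np.pi * self.var[None]) +
                     (X[:, None] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior[None], axis=1)]

# Route each sample to a subspace; a subspace-specific regressor would
# then be trained and applied per predicted label.
nb = GaussianNB().fit(shapes, labels)
pred = nb.predict(shapes)
```

In the full method, each predicted subspace selects its own cascaded regressor, so samples with similar head pose and deformation are handled by a regressor trained on compatible shape-appearance pairs.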

Highlights

  • Face alignment aims to automatically localize fiducial facial points

  • This paper proposes an efficient and novel alternative optimization subspace learning method, multi-subspace Supervised Descent Method (MS-SDM), which extends SDM to unconstrained face alignment

  • Given a face image I and initial facial landmark coordinates x₀, face alignment can be framed as minimizing the following function over Δx: f(x₀ + Δx) = ‖h(x₀ + Δx; I) − h(x*; I)‖₂²
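SDM tackles this objective by learning a sequence of linear descent maps from training data instead of computing Newton steps. A single such step can be sketched on a toy problem: the nonlinear feature function `h` below is an illustrative stand-in for the image features extracted around landmarks, and the linear map is fit by least squares on (feature, true-displacement) pairs, which is the core idea of supervised descent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for h(x; I): a fixed nonlinear function of the landmark vector.
def h(x):
    return np.concatenate([np.sin(x), np.cos(x), x ** 2])

dim = 4                                            # tiny landmark vector
x_star = rng.normal(size=dim)                      # ground-truth shape x*
X0 = x_star + 0.5 * rng.normal(size=(300, dim))    # perturbed initializations x0

# Learn a linear descent map Δx ≈ R·h(x0) + b by least squares.
Phi = np.stack([h(x) for x in X0])                 # features at each x0
Phi1 = np.hstack([Phi, np.ones((len(Phi), 1))])    # append bias column
Dx = x_star - X0                                   # target descent directions
W, *_ = np.linalg.lstsq(Phi1, Dx, rcond=None)      # stacked [R; b]

# One cascade step: apply the learned update; alignment error shrinks.
X1 = X0 + Phi1 @ W
```

In the full cascade, this step is repeated with a new map learned at each stage; MS-SDM's contribution is to learn such maps per subspace so that conflicting descent directions from very different poses are not averaged into one map.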

Summary

Introduction

Face alignment aims to automatically localize fiducial facial points (or landmarks). It is a fundamental step for many facial analysis tasks, e.g. face recognition [19, 20], face frontalization [21, 22], expression recognition [11, 31], and face attribute prediction [7, 25]. The field of face alignment has witnessed rapid progress in recent years, especially with the application and development of cascaded regression methods [2, 6, 27, 38, 39]. The approach is theoretically sound to some extent, with a rigorous explanation from the perspective of optimizing a nonlinear problem with Newton's method.

