Abstract
We propose a Low-Dimensional Deep Feature based Face Alignment (LDFFA) method to address the problem of face alignment "in-the-wild". Recently, Deep Bottleneck Features (DBF) have been proposed as an effective means of representing inputs with compact, low-dimensional descriptors. The locations of fiducial landmarks on human faces can be represented effectively with low-dimensional features because of the strong correlation among them. In this paper, we propose a novel deep CNN with a bottleneck layer that learns to extract a low-dimensional representation (DBF) of the fiducial landmarks from face images. We pre-train the CNN on a large, synthetically annotated dataset so that the extracted DBFs are robust to variations in pose, occlusion, and illumination. Our experiments show that the proposed approach achieves near real-time performance and higher accuracy than state-of-the-art results on several benchmarks.
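As a concrete illustration of the idea described above, the following is a minimal sketch (not the authors' implementation) of a CNN with a bottleneck layer that regresses fiducial-landmark coordinates from a face crop. The layer widths, the 64-dimensional bottleneck, the 128x128 input size, and the 68-landmark count are illustrative assumptions rather than values taken from the paper.

import torch
import torch.nn as nn

class BottleneckLandmarkCNN(nn.Module):
    # Hypothetical architecture: all sizes below are assumptions for illustration.
    def __init__(self, num_landmarks: int = 68, bottleneck_dim: int = 64):
        super().__init__()
        # Convolutional feature extractor for a 3 x 128 x 128 face crop.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Bottleneck layer: compresses the feature map into a compact,
        # low-dimensional descriptor of the landmark configuration.
        self.bottleneck = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, bottleneck_dim),
            nn.ReLU(),
        )
        # Regression head mapping the bottleneck feature to (x, y) per landmark.
        self.regressor = nn.Linear(bottleneck_dim, num_landmarks * 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.bottleneck(self.features(x))   # low-dimensional deep feature
        return self.regressor(z)                # flattened landmark coordinates

# Usage on a random 128 x 128 face crop (batch of 1):
model = BottleneckLandmarkCNN()
coords = model(torch.randn(1, 3, 128, 128))     # shape: (1, 136)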