Abstract

Facial landmark detection aims to locate a set of predefined points on human face images. Compared with earlier methods, the accuracy of facial landmark detection has improved greatly. However, many problems remain to be solved, such as the impact of environmental factors (extreme pose, occlusion, and blur) on detection accuracy and the trade-off between model size and accuracy. To address these problems, we introduce an approach based on data augmentation and cascaded regression. Using AugNet, which builds on disentangled image representations and generative adversarial networks, we map a human face image into a content space and a style space, and then rebuild a set of images in which the locations of the facial landmarks can be controlled. In this paper, we assume that facial landmarks, edges, and other semantic information in a human face are governed by the content space, whereas textures and colors mainly belong to the style space. At the same time, we propose a multi-stage cascaded facial landmark detector, named GLNet, to trade off model size against accuracy. GLNet localizes facial landmarks with a coarse-to-fine search and is optimized on the augmented dataset generated by AugNet. We conduct experiments on the benchmark datasets 300W, WFLW, and AFLW. Extensive experimental results demonstrate that our approach outperforms competing methods. In particular, our proposed facial landmark detector combines a tiny model size with high precision, making it suitable for real-time applications.
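The coarse-to-fine cascade described above can be illustrated with a minimal sketch. This is not GLNet itself: the stage regressors here are hypothetical toy functions that simply predict a fraction of the remaining residual, standing in for the learned networks of each cascade stage; the general pattern (each stage refines the previous stage's landmark estimate) is what matters.

```python
import numpy as np

def make_toy_stage(target, step=0.5):
    """Hypothetical stage regressor. In a learned cascade this would be a
    trained model mapping (image features, current landmarks) -> update;
    here it just moves the estimate a fixed fraction toward the target."""
    def regressor(features, landmarks):
        return step * (target - landmarks)
    return regressor

def cascade_refine(features, init_landmarks, stages):
    """Apply cascade stages in sequence, coarse to fine: each stage adds
    its predicted update to the current landmark estimate."""
    landmarks = init_landmarks.copy()
    for regressor in stages:
        landmarks = landmarks + regressor(features, landmarks)
    return landmarks

# Toy example: two 2-D landmarks, initial estimate at the origin.
target = np.array([[30.0, 40.0], [60.0, 45.0]])
init = np.zeros_like(target)
stages = [make_toy_stage(target) for _ in range(4)]
refined = cascade_refine(None, init, stages)
```

With four stages, each halving the residual, the remaining error shrinks to 1/16 of the initial error, mirroring how early cascade stages make coarse corrections and later stages make fine ones.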


