Abstract

Facial landmark detection in the wild remains a challenging problem in computer vision. Deep learning-based methods currently play a leading role in solving it. However, these approaches generally focus on local feature learning and ignore global relationships. Therefore, in this study, a self-attention mechanism is introduced into facial landmark detection. Specifically, a coarse-to-fine facial landmark detection method is proposed that uses two stacked hourglasses as the backbone, with a new landmark-guided self-attention (LGSA) block inserted between them. The LGSA block learns the global relationships between different positions on the feature map and allows feature learning to focus on the locations of landmarks with the help of a landmark-specific attention map, which is generated by the first-stage hourglass model. A novel attentional consistency loss is also proposed to ensure the generation of an accurate landmark-specific attention map. A new channel transformation block is used as the building block of the hourglass model to improve the model's capacity. The coarse-to-fine strategy is adopted both within and between stages to reduce complexity. Extensive experimental results on public datasets demonstrate the superiority of our proposed method over state-of-the-art models.
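To make the core idea concrete, below is a minimal sketch of what a landmark-guided self-attention block could look like in PyTorch. This is not the authors' implementation: the layer shapes, the channel-reduction factor, and the choice to gate the query/key features with the first-stage landmark attention map before computing all-pairs attention are all assumptions made purely for illustration.

```python
# Hypothetical sketch of a landmark-guided self-attention (LGSA) block.
# All names, dimensions, and the gating scheme are assumptions, not the
# paper's exact design.
import torch
import torch.nn as nn


class LandmarkGuidedSelfAttention(nn.Module):
    """Self-attention over an H x W feature map, gated by a landmark
    attention map produced by an earlier (first-stage) hourglass."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x: torch.Tensor, landmark_attn: torch.Tensor) -> torch.Tensor:
        # x:             (B, C, H, W) features from the first hourglass
        # landmark_attn: (B, 1, H, W) landmark-specific attention map in [0, 1]
        b, c, h, w = x.shape

        # Focus feature learning on landmark locations before computing
        # global (all-pairs) attention between spatial positions.
        gated = x * landmark_attn

        q = self.query(gated).flatten(2).transpose(1, 2)  # (B, HW, C/r)
        k = self.key(gated).flatten(2)                    # (B, C/r, HW)
        v = self.value(x).flatten(2)                      # (B, C, HW)

        attn = torch.softmax(torch.bmm(q, k), dim=-1)     # (B, HW, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)

        # Residual connection preserves the original local features.
        return x + self.gamma * out


if __name__ == "__main__":
    feats = torch.randn(2, 64, 32, 32)
    attn_map = torch.sigmoid(torch.randn(2, 1, 32, 32))
    block = LandmarkGuidedSelfAttention(channels=64)
    print(block(feats, attn_map).shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch, the learnable scalar `gamma` starts at zero so the block initially acts as an identity mapping and gradually mixes in global context during training, a common convention in self-attention modules for vision; the paper may handle this differently.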
