Abstract

In the field of deep-learning-based landmark detection, most research utilises convolutional neural networks to represent landmarks and rarely adopts Transformers to represent and encode them. Meanwhile, many works focus on modifying the network structure to improve performance, and there is little research on the distribution of landmarks. In this article, the authors propose an unsupervised model to extract the landmarks of objects in images. First, a Transformer is combined with a convolutional neural network to represent and encode the landmarks. Next, positive and negative sample pairs between landmarks are constructed, so that semantically consistent landmarks on the image are pulled closer in the feature space and semantically inconsistent landmarks are pushed farther apart. Then, the authors concentrate attention on the most active points to distinguish an object's landmarks from the background. Finally, based on the new contrastive loss, the network reconstructs the image from the object landmarks that are continuously learnt during training. Experiments show that the proposed model achieves better performance than other unsupervised methods on the CelebA, Annotated Facial Landmarks in the Wild (AFLW), and 300W datasets.
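The abstract does not give the exact form of the contrastive loss, but the described objective — pulling semantically consistent landmark pairs together and pushing inconsistent ones apart in feature space — matches an InfoNCE-style formulation. The sketch below is an illustrative PyTorch version under that assumption; the function name, the two-view pairing scheme, and the temperature value are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def landmark_contrastive_loss(feats_a, feats_b, temperature=0.1):
    """InfoNCE-style contrastive loss over landmark features (illustrative).

    feats_a, feats_b: (K, D) tensors holding features of the same K landmarks
    extracted from two views of an image. Landmark k in feats_a is the
    positive for landmark k in feats_b; all other landmarks act as negatives,
    so consistent landmarks are pulled together and inconsistent ones apart.
    """
    a = F.normalize(feats_a, dim=1)            # unit-length feature vectors
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature           # (K, K) cosine-similarity matrix
    targets = torch.arange(a.size(0))          # positives lie on the diagonal
    return F.cross_entropy(logits, targets)
```

In this formulation, minimising the loss maximises the similarity of each landmark with its counterpart in the other view relative to all other landmarks, which is one common way to realise the pull-together/push-apart behaviour the abstract describes.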
