Abstract

Unsupervised deep hashing has recently attracted increasing attention, mainly because of its ability to learn binary codes without identity annotations. However, because labels are predicted by pretext tasks, unsupervised deep hashing becomes unstable when learning with noisy labels. To mitigate this issue, we propose a simple but effective approach to self-supervised hash learning based on dual pseudo agreement. By adding a consistency constraint, our method suppresses corrupted labels and encourages generalization for effective knowledge distillation. Specifically, we use the refined pseudo labels as a stabilizing constraint when training hash codes, which implicitly encodes the semantic structure of the data into the learned Hamming space. Building on the stable pseudo labels, we propose a self-supervised hashing method with a mutual-information objective and a noise contrastive loss. Throughout hash learning, the stable pseudo labels and the data distribution act together as teachers that guide the binary code learning process. Extensive experiments on three publicly available datasets demonstrate that the proposed method consistently outperforms state-of-the-art methods by large margins.
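The "dual pseudo agreement" idea described above, keeping a pseudo label only when two independent labelers agree on it, can be sketched as a simple filtering step. This is an illustrative reconstruction, not the paper's implementation; the function name and the toy labels are assumptions, and aligning the two cluster indexings (e.g. via Hungarian matching) is assumed to have happened beforehand.

```python
# Hypothetical sketch of dual-pseudo-agreement filtering: pseudo labels
# produced by two independent pretext tasks are trusted only where both
# assignments agree, screening out likely-noisy labels before they are
# used to supervise hash-code learning.
import numpy as np

def agreement_mask(labels_a, labels_b):
    """Return a boolean mask selecting samples whose two pseudo labels agree.

    Assumes labels_a and labels_b are already aligned to a common cluster
    indexing (the matching step is not shown here).
    """
    labels_a = np.asarray(labels_a)
    labels_b = np.asarray(labels_b)
    return labels_a == labels_b

# Toy example: 6 samples; the two pseudo-labelings disagree on two of them.
a = [0, 1, 1, 2, 0, 2]
b = [0, 1, 2, 2, 1, 2]
mask = agreement_mask(a, b)
print(int(mask.sum()))  # → 4 samples kept for training
```

Only the masked subset would then contribute to the pseudo-label training signal; the remaining samples can still participate in the unsupervised contrastive term.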

Highlights

  • Hash learning [1]–[5] aims to accelerate the retrieval system’s speed when querying an image from a large-scale image database

  • The ablation studies clearly demonstrate the reliability of dual pseudo agreement and contrastive loss

  • If we can estimate the similarity matrix S more accurately, we can further improve the accuracy of unsupervised deep hashing


Summary

INTRODUCTION

Hash learning [1]–[5] aims to accelerate the retrieval system’s speed when querying an image from a large-scale image database. State-of-the-art unsupervised deep hashing methods [14]–[19] for image retrieval typically use a pretext task to learn representations on unlabeled data. We propose a simple but effective dual pseudo agreement labelling paradigm, which allows us to learn deep binary codes robustly even with extremely noisy labels. This conclusion is almost trivial from a statistical perspective, but in practice it significantly eases hash-code learning by lifting requirements on the availability of clean data. The ablation studies clearly demonstrate the reliability of dual pseudo agreement and contrastive loss.
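The noise contrastive loss mentioned above is, in spirit, an InfoNCE-style objective: relaxed hash codes from two augmented views of the same image form a positive pair, while codes of other images serve as negatives. The sketch below is an assumption-laden illustration (function name, temperature value, and the tanh relaxation are ours), not the paper's exact loss.

```python
# Minimal numpy sketch of an InfoNCE-style noise contrastive objective on
# tanh-relaxed hash codes: the diagonal of the view-to-view similarity
# matrix holds the positive pairs; every off-diagonal entry is a negative.
import numpy as np

def info_nce(codes_a, codes_b, temperature=0.5):
    """codes_a, codes_b: (N, d) relaxed codes for two views of N images."""
    a = codes_a / np.linalg.norm(codes_a, axis=1, keepdims=True)
    b = codes_b / np.linalg.norm(codes_b, axis=1, keepdims=True)
    logits = a @ b.T / temperature               # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # positives on the diagonal

rng = np.random.default_rng(0)
codes = np.tanh(rng.normal(size=(8, 16)))                 # view 1
noisy = np.tanh(rng.normal(size=(8, 16)) * 0.1 + codes)   # perturbed view 2
print(info_nce(codes, noisy))  # lower than for mismatched pairs
```

Minimizing this loss pulls the two views of each image toward the same binary code while pushing apart codes of different images, which is how the data distribution acts as a second "teacher" alongside the stabilized pseudo labels.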

RELATED WORK
DUAL PSEUDO AGREEMENT
CONTRASTIVE SELF-SUPERVISED HASHING
EXPERIMENTS
Findings
CONCLUSION

