Abstract
The goal of supervised hashing is to construct hash mappings from collections of images and semantic annotations such that semantically relevant images are embedded nearby in the learned binary hash representations. Existing deep supervised hashing approaches that learn hash codes within a classification framework often encode class labels as one-hot or multi-hot vectors. We argue that such label encodings do not adequately reflect the semantic relations among classes; instead, effective class label representations should be learned from data, providing more discriminative signals for hashing. In this article, we introduce Adaptive Labeling Deep Hashing (AdaLabelHash), which learns binary hash codes based on learnable class label representations. We treat the class labels as vertices of a K-dimensional hypercube; these vertices are trainable variables that are adapted together with the network weights during backpropagation. The label representations, referred to as codewords, are the target outputs of hash mapping learning. In the label space, semantically relevant images are then expressed by codewords that are close in Hamming distance, yielding compact and discriminative binary hash representations. Furthermore, we find that the learned label representations reflect semantic relations well. Our approach is easy to implement and simultaneously constructs both the label representations and the compact binary embeddings. Quantitative and qualitative evaluations on several popular benchmarks validate the superiority of AdaLabelHash in learning effective binary codes for image search.
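To make the core idea concrete, the following is a minimal PyTorch sketch of jointly training a hash network and trainable class codewords, under assumptions of our own: the class name AdaLabelHashSketch, the distance-based classification loss, and the quantization penalty are illustrative choices, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaLabelHashSketch(nn.Module):
    """Sketch: learn K-bit codewords (relaxed hypercube vertices) jointly
    with a hash mapping. All names and loss weights are illustrative."""

    def __init__(self, backbone: nn.Module, num_classes: int, k_bits: int):
        super().__init__()
        self.backbone = backbone  # image feature extractor producing K-dim outputs
        # Trainable label representations, initialized near the vertices of {-1, +1}^K.
        self.codewords = nn.Parameter(torch.randn(num_classes, k_bits).sign() * 0.5)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # tanh keeps the continuous relaxation of the hash output inside [-1, 1]^K.
        return torch.tanh(self.backbone(images))

    def loss(self, hash_out: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # Classify by negative distance to every codeword; cross-entropy pulls
        # each image toward its own class codeword and away from the others.
        relaxed_codewords = torch.tanh(self.codewords)
        logits = -torch.cdist(hash_out, relaxed_codewords)
        ce = F.cross_entropy(logits, labels)
        # Quantization term encourages outputs and codewords to approach +/-1.
        quant = (1.0 - hash_out.abs()).mean() + (1.0 - relaxed_codewords.abs()).mean()
        return ce + 0.1 * quant
```

In this sketch, a single optimizer over `model.parameters()` updates the codewords together with the network weights, and retrieval codes would be obtained as `torch.sign(model(images))`.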