Abstract

Owing to its low storage cost and fast query speed, hashing has been widely applied to approximate nearest neighbor search for large-scale image retrieval, and deep hashing further improves retrieval quality by learning a good image representation. However, existing deep hashing methods treat multi-label images as single-label ones, so the rich semantic information carried by multiple labels is ignored. Meanwhile, the imbalance between similar and dissimilar pairs leads to incorrect sample weights in the loss function, which degrades training performance and lowers the recall rate. In this paper, we propose a Deep Multi-Label Hashing (DMLH) model that generates binary hash codes preserving the semantic relationships among an image's multiple labels. The contributions of this model are twofold: (1) a novel sample weight calculation model that adaptively adjusts the weight of each sample pair according to the semantic similarity of the multi-label image pairs; (2) a sample-weighted cross-entropy loss function, designed according to image similarity, that balances the contributions of similar and dissimilar image pairs. Extensive experiments demonstrate that the proposed method generates hash codes with better retrieval performance on two benchmark datasets, NUS-WIDE and MS-COCO.
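The abstract does not give the exact formulas for the adaptive sample weight or the weighted loss. As a rough illustration of the idea it describes, the sketch below (all names and formulas are assumptions, not the paper's actual method) uses Jaccard overlap of binary label vectors as a multi-label similarity proxy, and scales a pairwise cross-entropy term by both that similarity and the ratio of similar to dissimilar pairs:

```python
import numpy as np

def label_similarity(y_i, y_j):
    # Jaccard overlap of two binary multi-label vectors.
    # Assumed proxy: the abstract does not specify the exact measure.
    inter = np.logical_and(y_i, y_j).sum()
    union = np.logical_or(y_i, y_j).sum()
    return inter / union if union else 0.0

def weighted_pair_cross_entropy(u_i, u_j, y_i, y_j, n_sim, n_dissim):
    """Illustrative pairwise cross-entropy with an adaptive sample weight.

    u_i, u_j : continuous hash-code outputs of the network for a pair
    y_i, y_j : binary multi-label vectors for that pair
    n_sim, n_dissim : counts of similar / dissimilar pairs in the batch,
                      used to rebalance the two pair populations
    """
    s = label_similarity(y_i, y_j)       # semantic similarity in [0, 1]
    sim_pair = s > 0                     # pairs sharing at least one label
    # inner product of hash outputs -> similarity probability via sigmoid
    theta = 0.5 * np.dot(u_i, u_j)
    p = np.clip(1.0 / (1.0 + np.exp(-theta)), 1e-7, 1.0 - 1e-7)
    total = n_sim + n_dissim
    if sim_pair:
        # up-weight the (usually scarcer) similar pairs, and further
        # up-weight pairs with larger label overlap
        w = (total / n_sim) * (1.0 + s)
        return -w * np.log(p)
    w = total / n_dissim
    return -w * np.log(1.0 - p)
```

The rebalancing factor (`total / n_sim` vs. `total / n_dissim`) is one common way to counter the similar/dissimilar imbalance the abstract mentions; the extra `(1 + s)` factor is a placeholder for the multi-label similarity weighting.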
