Abstract

By combining annotation-free training, low memory usage, and high query speed, unsupervised deep hashing has shown great promise in image retrieval. To address the lack of semantic information in unsupervised scenarios, related works leverage models pretrained on large-scale datasets (e.g., ImageNet) to estimate the semantic similarity for specific datasets (e.g., FLICKR-25K). However, most of them discard the original semantic attributes of the pretrained dataset, which causes the hash codes to overfit the estimated semantic similarity. To alleviate this problem, this paper considers the semantic similarity of the specific dataset while simultaneously preserving the semantic attributes from the pretrained dataset. To build a reliable semantic similarity matrix for the specific dataset, this paper develops a similarity distiller that jointly measures semantic similarity with the distance between deep features, the distance on the local manifold, and the distance on the connectivity graph. Moreover, an efficient attribute preserver is designed to maintain the correspondence between the hash codes and the category attributes of the pretrained dataset based on a regeneration criterion. The proposed method is named PSIDP for simplicity, and extensive image retrieval experiments on four benchmark datasets demonstrate its superiority over other state-of-the-art unsupervised deep hashing methods. The code is available at https://github.com/reresearcher/PSIDP.
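The abstract's similarity distiller fuses three cues (feature distance, local-manifold distance, and connectivity-graph distance) into one similarity matrix. The sketch below is only an illustration of that general idea, not the paper's actual formulation: the function names, the k-NN neighbourhood size, the equal fusion weights, and the mapping of fused distances into [-1, 1] are all assumptions of this sketch.

```python
import numpy as np

def cosine_distance(feats):
    # Pairwise cosine distance between L2-normalised deep features.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return 1.0 - f @ f.T

def manifold_distance(d, k=3):
    # Crude local-manifold proxy: keep each point's k nearest
    # neighbours (plus itself) and treat all other points as
    # maximally far away. (Assumed scheme, not the paper's.)
    out = np.full_like(d, d.max())
    idx = np.argsort(d, axis=1)[:, :k + 1]  # includes self (distance 0)
    for i, nbrs in enumerate(idx):
        out[i, nbrs] = d[i, nbrs]
    return out

def graph_distance(d, k=3):
    # Shortest-path distance on the k-NN connectivity graph,
    # computed with Floyd-Warshall on the sparsified matrix.
    g = manifold_distance(d, k)
    n = g.shape[0]
    for m in range(n):
        g = np.minimum(g, g[:, m:m + 1] + g[m:m + 1, :])
    return g

def distill_similarity(feats, k=3, weights=(1/3, 1/3, 1/3)):
    # Fuse the three distances with (assumed) equal weights,
    # then rescale into a similarity matrix in [-1, 1].
    d = cosine_distance(feats)
    fused = (weights[0] * d
             + weights[1] * manifold_distance(d, k)
             + weights[2] * graph_distance(d, k))
    fused = fused / fused.max()
    return 1.0 - 2.0 * fused
```

Under this sketch, each sample is maximally similar to itself (diagonal of 1), and pairs that are close under all three cues receive similarities near 1, while pairs far apart under every cue approach -1.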
