Abstract

Hashing has attracted increasing attention in image retrieval due to its high search speed and low storage cost. Traditional hashing methods project high-dimensional hand-crafted visual features to compact binary codes via linear or non-linear hash functions. Deep hashing methods, which integrate image representation learning and hash function learning into a unified framework, have shown superior performance. Most existing supervised deep hashing methods consider only the semantic similarities among images, using pair-wise or triplet-wise constraints as supervision; however, the ranking of the retrieval results, a crucial source of information, is neglected. Consequently, the produced hash codes may be suboptimal. In this paper, a new Deep Hashing with Top Similarity Preserving (DHTSP) method is proposed to optimize the quality of hash codes for image retrieval. Specifically, we utilize AlexNet to extract discriminative image representations directly from the raw image pixels and learn hash functions simultaneously. A top similarity preserving loss function is then designed to preserve the similarity of returned images at the top of the ranking list. Experimental results on three benchmark datasets show that our proposed method outperforms most state-of-the-art deep hashing methods as well as traditional hashing methods.
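To make the architecture described above concrete, the following is a minimal PyTorch sketch of a deep hashing network of this kind: an AlexNet backbone whose final layer is replaced by a hash layer, trained with a rank-weighted triplet loss that penalizes mistakes near the top of the ranking list more heavily. The abstract does not give DHTSP's actual objective, so the 1/log2(rank+1) discount, the margin, and all names below are illustrative assumptions, not the paper's formulation.

```python
# Sketch of a deep hashing net with a top-weighted triplet loss.
# NOTE: the exact DHTSP loss is not given in the abstract; the weighting
# scheme and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn
from torchvision import models


class DeepHashNet(nn.Module):
    """AlexNet backbone followed by a hash layer producing num_bits outputs."""

    def __init__(self, num_bits=48):
        super().__init__()
        alexnet = models.alexnet(weights=None)  # pretrained weights optional
        self.features = alexnet.features
        self.avgpool = alexnet.avgpool
        # Keep AlexNet's classifier up to fc7, then map 4096-d to num_bits.
        self.fc = nn.Sequential(
            *list(alexnet.classifier.children())[:-1],
            nn.Linear(4096, num_bits),
        )

    def forward(self, x):
        x = self.features(x)
        x = self.avgpool(x)
        x = torch.flatten(x, 1)
        # tanh is a differentiable relaxation of the sign() used at test time.
        return torch.tanh(self.fc(x))


def top_weighted_triplet_loss(anchor, positive, negative, rank, margin=2.0):
    """Triplet loss whose weight decays with the rank of each triplet, so
    errors at the top of the ranking list cost more. `rank` holds 1-based
    ranks; the NDCG-style 1/log2(rank+1) discount is an assumed choice."""
    d_pos = (anchor - positive).pow(2).sum(dim=1)
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    weight = 1.0 / torch.log2(rank.float() + 1.0)
    return (weight * torch.clamp(d_pos - d_neg + margin, min=0)).mean()
```

At retrieval time, the relaxed outputs would be binarized, e.g. `codes = torch.sign(model(images))`, and ranking would be done by Hamming distance between codes.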
