Abstract

Similarity-preserving hashing has become the mainstream approach to approximate nearest neighbor (ANN) search for large-scale image retrieval. Recent research shows that deep neural networks can produce effective feature representations. Most existing deep hashing schemes simply use the middle-layer features of a deep network to measure the similarity between query images and database images. However, these visual features are suboptimal for discriminating the semantic content of images, especially complex images that contain multiple objects. In this paper, a deep framework is employed to learn multi-level non-linear transformations that yield richer image features, and these intermediate features are then combined with top-layer visual information for image retrieval. Three criteria are enforced on the resulting compact codes: (1) minimal quantization loss; (2) evenly distributed binary bits; (3) independent bits. Experimental results on five public large-scale datasets demonstrate the superiority of our method over several state-of-the-art methods.
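The three criteria on the binary codes can be sketched as standard regularizers on relaxed (real-valued) hash codes. This is a minimal illustrative sketch, not the paper's actual formulation: the function name, the mean-squared penalties, and the use of NumPy are all assumptions.

```python
import numpy as np

def hashing_penalties(U):
    """Three common deep-hashing regularizers for relaxed codes U
    of shape (n_samples, n_bits). Illustrative only."""
    B = np.sign(U)                      # target binary codes in {-1, +1}
    n, k = U.shape

    # (1) Quantization loss: relaxed codes should lie close to +/-1.
    quantization = np.mean((U - B) ** 2)

    # (2) Bit balance ("evenly distributed"): each bit should be -1 or
    #     +1 about equally often, so its mean over samples is near zero.
    balance = np.mean(np.mean(U, axis=0) ** 2)

    # (3) Bit independence: the bit correlation matrix should be close
    #     to the identity, i.e. decorrelated bits.
    corr = (U.T @ U) / n
    independence = np.mean((corr - np.eye(k)) ** 2)

    return quantization, balance, independence

# Example: random relaxed codes for 6 images with 4 bits each.
rng = np.random.default_rng(0)
U = rng.uniform(-1.0, 1.0, size=(6, 4))
q, b, i = hashing_penalties(U)
```

In training, such terms would typically be weighted and added to the retrieval loss; the quantization term vanishes once the relaxed codes are exactly binary.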
