Abstract

Due to its storage and retrieval efficiency, hashing has been widely deployed for approximate nearest neighbor search in fast image retrieval on large-scale datasets. It aims to map images to compact binary codes that approximately preserve the data relations in the Hamming space. However, most existing approaches learn hashing functions from hand-crafted features, which cannot optimally capture the underlying semantic information of images. Inspired by the rapid progress of deep learning techniques, in this paper we design a novel Deep Graph Laplacian Hashing (DGLH) method that simultaneously learns robust image features and hash functions in an unsupervised manner. Specifically, we devise a deep network architecture with graph Laplacian regularization to preserve the neighborhood structure in the learned Hamming space. At the top layer of the deep network, we minimize the quantization error and enforce the bits to be balanced and uncorrelated, which makes the learned hash codes more efficient. We further use back-propagation to optimize the parameters of the network. Notably, our approach does not require labeled training data and is therefore more practical for real-world applications than supervised hashing methods. Experimental results on three benchmark datasets demonstrate that DGLH outperforms state-of-the-art unsupervised hashing methods on image retrieval tasks.
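For readers unfamiliar with this family of objectives, a graph-Laplacian-regularized hashing loss typically combines the four ingredients named above (neighborhood preservation, quantization error, bit balance, bit decorrelation). The formulation below is only an illustrative sketch, not the exact DGLH objective from the paper; the relaxed codes H, the Laplacian L, and the trade-off weights \lambda_1, \lambda_2, \lambda_3 are assumed notation:

% Illustrative objective only; the precise DGLH formulation appears in the full paper.
\[
\min_{\Theta}\;
\operatorname{tr}\!\left(H^{\top} L H\right)
+ \lambda_{1}\,\bigl\|H - \operatorname{sgn}(H)\bigr\|_{F}^{2}
+ \lambda_{2}\,\bigl\|H^{\top}\mathbf{1}\bigr\|_{2}^{2}
+ \lambda_{3}\,\Bigl\|\tfrac{1}{n}H^{\top}H - I\Bigr\|_{F}^{2},
\]

where H \in \mathbb{R}^{n \times k} collects the top-layer network outputs for n training images and k hash bits, L is the Laplacian of a nearest-neighbor similarity graph built on the image features, and \Theta denotes the network parameters optimized by back-propagation; sgn(H) gives the final binary codes.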
