Abstract

Deep-learning-based hashing methods have recently proven effective for image retrieval. Most high-performance methods among them are supervised frameworks, which require human-annotated labels. Given the difficulty of labeling large-scale image datasets, unsupervised methods, which need only the images themselves for training, are better suited to practical applications. However, improving the discriminative ability of hash codes generated by unsupervised models remains a challenging problem. In this paper, we present a novel deep framework called Unsupervised Deep Triplet Hashing (UDTH) for scalable image retrieval. UDTH builds pseudo triplets based on the neighborhood structure of the high-dimensional visual feature space and then addresses two problems through the proposed objective function: 1) a triplet network is used to maximize the distance between binary representations of different classes; 2) an autoencoder and binary quantization are exploited to learn hash codes that preserve the structural information of the original samples. Extensive experiments on the CIFAR-10, NUS-WIDE, and MIRFLICKR-25K datasets show that the proposed UDTH is superior to state-of-the-art methods.
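The two ingredients named in the abstract can be illustrated with a minimal sketch: a hinge-style triplet loss that pushes an anchor away from a negative sample while pulling it toward a positive one, and a sign-based binary quantization step that maps real-valued codes to hash bits. This is a generic illustration of those standard techniques, not the paper's actual objective function; the function names and the margin value are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: penalize when the anchor is not at least
    `margin` closer (in squared Euclidean distance) to the positive than
    to the negative. Illustrative, not the paper's exact formulation."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

def binarize(codes):
    """Sign-based binary quantization: real-valued codes -> {-1, +1} bits."""
    return np.where(np.asarray(codes) >= 0, 1.0, -1.0)
```

For example, when the anchor coincides with the positive and the negative is far away, the hinge is inactive and the loss is zero; when positive and negative are equally distant, the loss equals the margin.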
