Abstract

Hashing has been widely exploited in recent years due to the rapid growth of image and video data on the web. Benefiting from recent advances in deep learning, deep hashing methods have achieved promising results with supervised information. However, collecting supervised information is usually expensive. To utilize both labeled and unlabeled data samples, many semi-supervised hashing methods based on Generative Adversarial Networks (GANs) have been proposed. Most of them still need conditional information, which is usually generated by pre-trained neural networks or by leveraging random binary vectors. A natural question about these methods is how to generate better conditional information given the semantic similarity information. In this paper, we propose a general two-stage conditional GAN hashing framework based on pairwise label information. Both labeled and unlabeled data samples are exploited to learn hash codes under our framework. In the first stage, the conditional information is generated via a general Bayesian approach, which has a much lower-dimensional representation and maintains the semantic information of the original data samples. In the second stage, a semi-supervised approach is presented to learn hash codes based on the conditional information. Both a pairwise cross-entropy loss and an adversarial loss are introduced to make full use of labeled and unlabeled data samples. Extensive experiments show that the proposed algorithm outperforms current state-of-the-art methods on three benchmark image datasets, which demonstrates the effectiveness of our method.
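The abstract does not spell out the pairwise cross-entropy loss it mentions. As a point of reference, a minimal NumPy sketch of the standard pairwise-likelihood loss commonly used in pairwise-label deep hashing is shown below; the function name and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def pairwise_loss(U, S):
    """Negative log-likelihood of pairwise similarity labels.

    U: (n, k) real-valued network outputs (relaxed hash codes).
    S: (n, n) similarity labels, s_ij = 1 if samples i and j are similar.
    """
    theta = 0.5 * U @ U.T  # theta_ij = <u_i, u_j> / 2
    # log(1 + exp(theta_ij)) - s_ij * theta_ij, summed over all pairs;
    # np.logaddexp keeps the log-sum-exp numerically stable
    return np.sum(np.logaddexp(0.0, theta) - S * theta)

# Toy example: two similar samples and one dissimilar sample
codes = np.array([[ 1.0,  1.0, -1.0],
                  [ 1.0,  1.0, -1.0],
                  [-1.0, -1.0,  1.0]])
sim = np.array([[1, 1, 0],
                [1, 1, 0],
                [0, 0, 1]])
loss = pairwise_loss(codes, sim)
```

Codes that agree with the similarity matrix yield a lower loss than codes that contradict it, which is what drives the hash codes of similar pairs together during training.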
