Abstract

To optimize binary hash codes with deep neural networks, greedy optimization that relaxes the discrete constraint is widely used because of its efficiency. However, the noise introduced by skipping gradients may destabilize the optimization in contrastive learning, since this noise is superimposed on the embeddings of two distorted views. To reduce this undesirable effect, we propose a novel soft-to-hard hashing method that introduces an auxiliary loss in the penultimate block of the model based on the information bottleneck principle; this loss not only propagates the correlations between positives but also makes the embedding non-redundant with respect to the samples in a high-dimensional continuous space. Our method surpasses state-of-the-art results on three public datasets and demonstrates how a soft code can help greedy back-propagation find a better solution during optimization. Another benefit of our method is that it enables joint training of unsupervised representation learning and hash code generation, achieving a 5.4% mAP gain over conventional step-by-step training on the CIFAR-10 dataset.
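The abstract does not give the exact architecture or loss, so the following PyTorch sketch only illustrates the general setup it describes, under stated assumptions: a straight-through sign estimator stands in for the greedy relaxation of the discrete constraint (the source of the skipped-gradient noise), and a plain contrastive term on the penultimate continuous embedding stands in for the information-bottleneck auxiliary loss. All names (STESign, SoftToHardHashHead, aux_weight, the dimensions) are hypothetical and not from the paper.

    # Minimal sketch, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STESign(torch.autograd.Function):
        """Binarize in the forward pass; pass the gradient straight through."""
        @staticmethod
        def forward(ctx, x):
            return torch.sign(x)

        @staticmethod
        def backward(ctx, grad_output):
            # sign() has zero gradient almost everywhere, so it is skipped
            # (treated as identity) here -- the noise source mentioned above.
            return grad_output

    class SoftToHardHashHead(nn.Module):
        """Penultimate continuous embedding -> soft code -> binary hash code."""
        def __init__(self, in_dim=2048, embed_dim=512, code_bits=64):
            super().__init__()
            self.penultimate = nn.Sequential(
                nn.Linear(in_dim, embed_dim), nn.BatchNorm1d(embed_dim), nn.ReLU()
            )
            self.to_code = nn.Linear(embed_dim, code_bits)

        def forward(self, features):
            z = self.penultimate(features)           # continuous embedding
            soft_code = torch.tanh(self.to_code(z))  # relaxed (soft) code
            hard_code = STESign.apply(soft_code)     # binary code via straight-through sign
            return z, soft_code, hard_code

    def nt_xent(a, b, temperature=0.2):
        """Normalized-temperature cross-entropy over two distorted views."""
        a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
        reps = torch.cat([a, b], dim=0)                      # (2N, D)
        sim = reps @ reps.t() / temperature                  # (2N, 2N)
        n = a.size(0)
        mask = torch.eye(2 * n, dtype=torch.bool, device=sim.device)
        sim = sim.masked_fill(mask, float("-inf"))           # drop self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(sim.device)
        return F.cross_entropy(sim, targets)

    def training_loss(head, feats_view1, feats_view2, aux_weight=0.5):
        """Contrastive loss on the hard codes plus an auxiliary loss on the
        penultimate embedding (a stand-in for the information-bottleneck term)."""
        z1, _, h1 = head(feats_view1)
        z2, _, h2 = head(feats_view2)
        code_loss = nt_xent(h1, h2)   # contrastive loss on binary codes
        aux_loss = nt_xent(z1, z2)    # auxiliary loss in the penultimate block
        return code_loss + aux_weight * aux_loss

The straight-through estimator is the standard greedy relaxation for binary codes; the auxiliary term here is only a placeholder for the paper's information-bottleneck loss, which the abstract does not specify.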
