Abstract
Deep hashing has been widely applied to large-scale image retrieval due to its robust retrieval performance and high computational and storage efficiency. Most deep supervised hashing methods rely on pairwise or triplet sampling for optimization. However, these methods can only exploit the intrinsic structure information of a small subset of images, which yields a suboptimal retrieval space. To address this limitation, we propose a novel deep multi-negative supervised hashing (DMNSH) method whose basic idea is to sample one positive and multiple negatives for each anchor, thereby leveraging more structural and supervisory information during training. To improve the training efficiency of the convolutional neural network (CNN) for large-scale image retrieval, DMNSH adopts a mini-batch optimization strategy, and a sample-reusing strategy is proposed to construct multi-negative tuples efficiently from the limited training images in each mini-batch. To perform multi-negative learning, we further design a multi-negative loss function in which hash codes are relaxed to CNN output features. Minimizing this loss both preserves the similarity semantics of the images and reduces the quantization error introduced by the relaxation. To stabilize optimization and improve retrieval performance, an adaptive margin is further incorporated into the loss function. Stochastic gradient descent and backpropagation are employed to optimize the CNN parameters. Experimental results on three popular deep hashing datasets demonstrate that DMNSH significantly outperforms state-of-the-art hashing methods in terms of both precision and efficiency.
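The abstract does not reproduce the paper's exact formulation, but a minimal PyTorch sketch of a multi-negative tuple loss over relaxed hash codes, with a quantization penalty, might look as follows. The function name, the squared-Euclidean distance, and the fixed `margin` and `quant_weight` values are illustrative assumptions; in particular, the paper adapts the margin during training rather than fixing it.

```python
import torch
import torch.nn.functional as F

def multi_negative_loss(anchor, positive, negatives,
                        margin=0.5, quant_weight=0.01):
    """Sketch of a multi-negative tuple loss over relaxed hash codes.

    anchor, positive: (B, K) CNN output features (relaxed K-bit codes).
    negatives:        (B, M, K) features of M negatives per anchor.
    margin:           hypothetical base margin (the paper adapts it).
    quant_weight:     hypothetical weight for the quantization penalty.
    """
    # Squared distances from each anchor to its positive and M negatives.
    d_pos = (anchor - positive).pow(2).sum(dim=1)                     # (B,)
    d_neg = (anchor.unsqueeze(1) - negatives).pow(2).sum(dim=2)      # (B, M)

    # Hinge over every negative: the positive should be closer to the
    # anchor than each negative by at least `margin`.
    ranking = F.relu(d_pos.unsqueeze(1) - d_neg + margin).sum(dim=1)  # (B,)

    # Quantization penalty: push relaxed codes toward {-1, +1} so that
    # binarizing the CNN outputs loses little information.
    codes = torch.cat([anchor, positive, negatives.flatten(0, 1)], dim=0)
    quant = (codes.abs() - 1.0).pow(2).mean()

    return ranking.mean() + quant_weight * quant
```

Because the loss is a differentiable function of the CNN outputs, it can be minimized directly with stochastic gradient descent and backpropagation, as the abstract describes.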