Abstract

The binary coding technique has been widely used in approximate nearest neighbors (ANN) search tasks. Traditional hashing algorithms treat all binary bits equally, which usually causes an ambiguous ranking. To address this issue, we propose an innovative bitwise weighting method dubbed minimal residual ordinal loss hashing (MROLH). Unlike two-step mechanisms, MROLH simultaneously learns binary codes and bitwise weights through a feedback mechanism, so that when the algorithm converges, the binary codes and bitwise weights are well adapted to each other. Furthermore, we establish an ordinal relation preserving constraint based on quartic samples to strengthen the preservation of relative similarity. To reduce the training complexity, we utilize a tensor ordinal graph to represent the quartic ordinal relations, and the original objective function is approximated by one based on triplet samples. In this paper, we also assign different weight values to training samples. During the training procedure, the weight of each datum is initialized to the same value, and we iteratively boost the weights of data whose relative similarity is not well preserved. As a result, we can minimize the residual ordinal loss. Experimental results on three large-scale ANN search benchmark datasets, i.e., SIFT1M, GIST1M, and Cifar10, show that the proposed MROLH achieves superior ANN search performance in both the Hamming space and the weighted Hamming space over state-of-the-art approaches.
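To make the "ambiguous ranking" problem concrete, the sketch below shows how a learned bitwise weight vector can break ties that plain Hamming ranking cannot. This is a minimal illustration, not the paper's implementation: the weight vector `w` here is a hypothetical example, whereas MROLH learns it jointly with the binary codes.

```python
# Minimal sketch: bitwise weights break ties that make plain
# Hamming ranking ambiguous. The weights `w` are hypothetical.
import numpy as np

def hamming_rank(query, codes):
    """Rank database codes by unweighted Hamming distance to the query."""
    dists = np.count_nonzero(codes != query, axis=1)
    return np.argsort(dists), dists

def weighted_hamming_rank(query, codes, w):
    """Rank by weighted Hamming distance: a disagreeing bit k costs w[k]."""
    dists = ((codes != query) * w).sum(axis=1)
    return np.argsort(dists), dists

query = np.array([1, 0, 1, 1])
codes = np.array([[1, 0, 0, 1],     # differs from query only in bit 2
                  [0, 0, 1, 1]])    # differs from query only in bit 0
w = np.array([0.9, 0.4, 0.1, 0.6])  # hypothetical learned bitwise weights

print(hamming_rank(query, codes)[1])              # [1 1] -> ambiguous tie
print(weighted_hamming_rank(query, codes, w)[1])  # [0.1 0.9] -> tie broken
```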

Highlights

  • The aim of hashing algorithms [1,2,3,4,5,6] is to learn binary representations of data that preserve their original similarity relationships in the Hamming space

  • In this paper, we mainly focus on two issues: (a) how to preserve the original ordinal relation in the Hamming space and the weighted Hamming space (a triplet-based sketch follows this list), and (b) how to guarantee that bitwise weights and binary codes are well adapted to each other

  • We evaluate the comparative experiments on three large datasets, SIFT1M [18], GIST1M [19], and Cifar10 [20], which are widely used in approximate nearest neighbors (ANN) search experiments
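The sketch below illustrates how relative-similarity preservation can be scored over triplets (q, x_pos, x_neg), where x_pos is closer to q than x_neg in the original space. The hinge form and the `margin` parameter are assumptions for illustration only; the paper's exact objective (quartic samples approximated via a tensor ordinal graph) is not reproduced here.

```python
# Illustrative triplet ordinal loss, NOT the exact MROLH objective.
# A triplet contributes loss when the "near" neighbor fails to rank
# ahead of the "far" one by at least `margin` in weighted Hamming space.
import numpy as np

def weighted_hamming(a, b, w):
    return ((a != b) * w).sum()

def triplet_ordinal_loss(triplets, codes, w, margin=1.0):
    """Sum of ranking violations over triplets of code indices (q, pos, neg)."""
    loss = 0.0
    for q, pos, neg in triplets:
        d_pos = weighted_hamming(codes[q], codes[pos], w)
        d_neg = weighted_hamming(codes[q], codes[neg], w)
        loss += max(0.0, d_pos - d_neg + margin)
    return loss
```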



Introduction

The aim of hashing algorithms [1,2,3,4,5,6] is to learn binary representations of data that preserve their original similarity relationships in the Hamming space. The data-independent hashing, such as locality-sensitive hashing (LSH) [7], randomly generates hashing functions. Data-dependent hashing methods, by contrast, can be divided into the absolute similarity preserving hashing [10, 11] and the relative similarity preserving hashing [6, 12]. The former emphasizes that the Hamming distances of similar data pairs should be as small as possible, and it is suitable for the semantic neighbor search task. The relative similarity preserving hashing can achieve better ANN search performance
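As a concrete example of data-independent hashing in the LSH [7] style, the sketch below generates hashing functions as random hyperplanes, without looking at the data: each bit is the sign of a random projection. The dimensions and seed are arbitrary choices for illustration.

```python
# Minimal sketch of random-hyperplane hashing (LSH-style):
# hashing functions are drawn at random, independent of the data.
import numpy as np

rng = np.random.default_rng(0)

def random_hyperplane_hash(X, n_bits=32, rng=rng):
    """Map real-valued vectors X of shape (n, d) to n_bits-bit binary codes."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_bits))  # random projection directions
    return (X @ W > 0).astype(np.uint8)   # bit = 1 if above the hyperplane

X = rng.standard_normal((5, 128))  # 5 toy descriptors (e.g., SIFT-like)
codes = random_hyperplane_hash(X)
print(codes.shape)  # (5, 32)
```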
