Abstract

State-of-the-art hashing methods, such as kernelised locality-sensitive hashing and spectral hashing, have high algorithmic complexities for building the hash codes and tables. Our observation of existing hashing methods is that putting two dissimilar data points into the same hash bucket only reduces the efficiency of the hash table but does not hurt the query accuracy, whereas putting two similar data points into different hash buckets reduces the correctness (i.e. query accuracy) of the hashing method. Therefore, it is far more important for a good hashing method to ensure that similar data points have a high probability of being placed in the same bucket than to account for the relations between dissimilar data points. Moreover, attracting similar data points to the same hash bucket naturally suppresses the placement of dissimilar data points in that bucket. Based on this locality-preserving observation, we propose a new hashing method, called locality-preserving hashing, which builds the hash codes and tables with much lower algorithmic complexity. Experimental results show that the proposed method is highly competitive with the state of the art in training time on large data-sets, while achieving comparable or even better query accuracy.
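The asymmetry argued above can be made concrete with a minimal sketch of hash-table-based approximate nearest-neighbour search. The sketch below is not the authors' method; it uses a generic random-hyperplane (sign-of-projection) hash, a standard LSH family, and all names in it (hash_code, build_table, query) are illustrative assumptions. It shows why a dissimilar point colliding into the query's bucket only adds a candidate that the exact re-ranking step filters out (lower efficiency), while a similar point hashed to a different bucket is never retrieved at all (lower accuracy).

```python
import numpy as np

def hash_code(x, planes):
    """Binary code: signs of projections onto random hyperplanes."""
    return tuple((x @ planes.T > 0).astype(int))

def build_table(data, planes):
    """Group points by hash code; each code indexes one bucket."""
    table = {}
    for i, x in enumerate(data):
        table.setdefault(hash_code(x, planes), []).append(i)
    return table

def query(q, data, table, planes):
    """Look up the query's bucket and re-rank candidates by exact distance.
    A dissimilar point that collides into this bucket is merely an extra
    candidate to filter (efficiency cost); a similar point hashed to a
    different bucket is never seen at all (accuracy cost)."""
    candidates = table.get(hash_code(q, planes), [])
    return min(candidates, key=lambda i: np.linalg.norm(data[i] - q), default=None)

rng = np.random.default_rng(0)
data = rng.standard_normal((1000, 32))
planes = rng.standard_normal((12, 32))   # 12 hash bits
table = build_table(data, planes)
noisy_query = data[0] + 0.01 * rng.standard_normal(32)
print(query(noisy_query, data, table, planes))  # ideally returns index 0
```

In this toy setting, query accuracy depends only on whether the true neighbour shares the query's bucket, which is exactly the locality-preserving property the abstract argues a hashing method should prioritise.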
