Abstract

Fast similarity search is becoming increasingly critical given the ever-growing sizes of datasets. Hashing approaches provide both fast search mechanisms and compact indexing structures to address this critical need. In image retrieval problems where labeled training data is available, supervised hashing methods prevail over unsupervised methods. However, most supervised hashing methods are batch learners; this hinders their ability to adapt to changes as a dataset grows and diversifies. In this work, we propose an online supervised hashing technique that is based on Error Correcting Output Codes. Given an incoming stream of training data with corresponding labels, our method learns and adapts its hashing functions in a discriminative manner. Our method makes no assumption about the number of possible class labels, and accommodates new classes as they are presented in the incoming data stream. In experiments with three image retrieval benchmarks, the proposed method yields state-of-the-art retrieval performance as measured in Mean Average Precision, while also being orders of magnitude faster than competing batch methods for supervised hashing.
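To make the general idea concrete, the sketch below is a minimal, illustrative approximation of online supervised hashing with ECOC-style codewords; it is not the paper's exact algorithm. The linear hash model, random codeword assignment, hinge-style per-bit update, and all names (OnlineECOCHasher, partial_fit) are assumptions introduced for illustration only.

```python
# Illustrative sketch (assumed, not the paper's algorithm): online supervised
# hashing where each class label is mapped to a binary ECOC-style codeword and
# linear hash functions are updated from a stream of labeled examples.
import numpy as np

class OnlineECOCHasher:
    def __init__(self, dim, n_bits, lr=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W = 1e-3 * self.rng.standard_normal((n_bits, dim))  # linear hash functions
        self.codewords = {}   # class label -> target codeword in {-1, +1}^n_bits
        self.n_bits = n_bits
        self.lr = lr

    def _codeword(self, label):
        # Assign a fresh random codeword the first time a class label is seen,
        # so new classes arriving in the stream are accommodated on the fly.
        if label not in self.codewords:
            self.codewords[label] = self.rng.choice([-1.0, 1.0], size=self.n_bits)
        return self.codewords[label]

    def partial_fit(self, x, label):
        # One SGD step on a hinge-style loss that pushes each hash bit
        # toward the corresponding bit of the class codeword.
        c = self._codeword(label)
        scores = self.W @ x                    # real-valued bit responses
        violated = (c * scores) < 1.0          # bits with insufficient margin
        # Gradient of max(0, 1 - c_j * w_j^T x) w.r.t. w_j is -c_j * x when violated
        self.W[violated] += self.lr * np.outer(c[violated], x)

    def hash(self, x):
        # Compact binary code used for indexing and retrieval.
        return (self.W @ x >= 0).astype(np.uint8)
```

Under these assumptions, a new labeled sample from the stream would be consumed with a single call to partial_fit, and queries would be indexed by their binary codes from hash; the actual method's codeword construction and update rule are described in the full paper.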
