Abstract

In recent years, with the continuous growth of multimedia data on the Internet, multimodal hashing has attracted increasing attention for its efficiency in large-scale cross-modal retrieval. Most existing multimodal hashing methods are batch-based and cannot cope with growing streaming data. Online multimodal hashing adopts an online learning strategy to learn hash models incrementally, which enables it to process large-scale streaming data. However, existing supervised online multimodal hashing methods still suffer from several limitations: (1) most methods cannot update the hash codes of old data when the hash model changes, which hinders retrieval accuracy; (2) the discrete optimization that most methods use to learn binary hash codes is either inefficient or ineffective. To address these limitations, this paper proposes an efficient supervised online multimodal hashing method termed Online Adaptive Supervised Hashing (OASH). OASH regresses class labels and training data to binary hash codes, learning discrete hash codes and hash functions with an efficient online optimization scheme. It learns hash functions incrementally from newly arriving data and updates hash codes with the latest hash model. By adaptively updating hash functions and hash codes with new data rather than accessing old data, it achieves substantial gains in retrieval performance and savings in computation. Extensive experiments on three benchmark datasets validate the superiority of OASH over state-of-the-art methods in both accuracy and efficiency.
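As a rough illustration of the online update pattern the abstract describes (incrementally learning hash functions from streaming chunks without revisiting old data, then re-encoding data with the latest hash model), the following is a minimal sketch. The class name, the ridge-regression-style update via running sufficient statistics, and the random target codes are all illustrative assumptions; this is not the paper's actual OASH optimization scheme.

```python
import numpy as np

def sign_codes(X, W):
    """Map features X (n x d) to binary codes in {-1, +1} via projection W (d x r)."""
    return np.where(X @ W >= 0, 1, -1)

class OnlineLinearHasher:
    """Toy online hasher: linear projection followed by sign(), updated per chunk."""
    def __init__(self, dim, n_bits, reg=1.0):
        # Running sufficient statistics so each update needs only the new chunk:
        # A accumulates X^T X, B accumulates X^T (target codes).
        self.A = reg * np.eye(dim)
        self.B = np.zeros((dim, n_bits))
        self.W = np.zeros((dim, n_bits))

    def partial_fit(self, X_chunk, target_codes):
        """Refresh the projection with a newly arriving chunk (no access to old data)."""
        self.A += X_chunk.T @ X_chunk
        self.B += X_chunk.T @ target_codes
        self.W = np.linalg.solve(self.A, self.B)
        return self.W

# Toy stream: random target codes stand in for the label-regressed codes a
# supervised method would learn.
rng = np.random.default_rng(0)
hasher = OnlineLinearHasher(dim=16, n_bits=8)
seen_X = []
for _ in range(3):                                   # three arriving data chunks
    X = rng.standard_normal((50, 16))
    targets = np.where(rng.standard_normal((50, 8)) >= 0, 1, -1)
    hasher.partial_fit(X, targets)
    seen_X.append(X)

# Re-encode all previously seen data with the latest hash model, mirroring the
# "update hash codes of old data" idea in the abstract.
all_codes = sign_codes(np.vstack(seen_X), hasher.W)
print(all_codes.shape)                               # (150, 8)
```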
