Abstract

Hash-based learning has attracted considerable attention due to its fast retrieval speed and low computational cost on large-scale databases. Compared with unsupervised hashing, supervised hashing generally achieves higher retrieval accuracy by leveraging supervised information. Most existing supervised hashing methods, such as supervised discrete hashing (SDH) and fast SDH (FSDH), are designed for the centralized setting. SDH regresses each hash code to its corresponding label, whereas FSDH regresses each label to its corresponding hash code. However, in many realistic applications, large amounts of data are distributed across different sites. Supervised distributed hashing (SupDisH), which combines a distributed framework with supervised learning, has therefore been proposed to overcome the limitations of centralized hashing. In this paper, building on FSDH, we propose distributed fast supervised discrete hashing (DFSDH), which inherits the excellent retrieval performance of SupDisH while significantly improving efficiency. Specifically, FSDH is embedded in a distributed framework in which the centralized hash learning model is shared by all agents. Meanwhile, consistency constraints are introduced so that multiple agents carry out distributed hash learning in parallel. Each agent employs an alternating iterative procedure to obtain high-quality binary codes and hash functions. Extensive experiments demonstrate that DFSDH is competitive with most centralized supervised hashing methods and existing distributed hashing methods.
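To make the two regression directions concrete, the objectives below follow the standard SDH and FSDH formulations from the literature; the regularization weights \lambda and \nu are assumptions and may differ from this paper. Here B denotes the binary codes, Y the label matrix, \phi(X) a nonlinear embedding of the data, and W, G, P regression/projection matrices:

\text{SDH (codes} \to \text{labels):} \quad \min_{B,W,P} \; \|Y - BW\|_F^2 + \lambda\|W\|_F^2 + \nu\|B - \phi(X)P\|_F^2

\text{FSDH (labels} \to \text{codes):} \quad \min_{B,G,P} \; \|B - YG\|_F^2 + \lambda\|G\|_F^2 + \nu\|B - \phi(X)P\|_F^2

\text{s.t. } B \in \{-1,+1\}^{n \times r}

Reversing the regression direction lets FSDH update the codes in closed form, B = \mathrm{sgn}(YG + \nu\,\phi(X)P), which is the source of its speedup over SDH.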

Highlights

  • As one of the methods proposed for approximate nearest neighbor search, hashing has been widely studied for several decades and used in various large-scale applications, including object recognition, computer vision, image retrieval, and related areas [1]–[4]

  • Experiments are conducted on two image datasets, CIFAR-10 [26] and NUS-WIDE [27], to evaluate the performance of distributed fast supervised discrete hashing (DFSDH) in terms of training time, average precision, and mean average precision (MAP); a sketch of the MAP computation follows these highlights

  • In terms of training time, DFSDH clearly outperforms supervised distributed hashing (SupDisH), and this advantage becomes more pronounced as the code length increases
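MAP, reported in the highlights above, averages the per-query average precision over all queries. The following is a minimal sketch of the usual computation for Hamming-ranked retrieval, assuming boolean ground-truth relevance per ranked position; the function names are illustrative, not the paper's code:

    import numpy as np

    def average_precision(relevance):
        # relevance: boolean array over retrieved database items,
        # ordered by ascending Hamming distance to the query (rank 1 first)
        if not relevance.any():
            return 0.0
        hits = np.cumsum(relevance)  # relevant items seen up to each rank
        precision_at_k = hits / np.arange(1, len(relevance) + 1)
        return float(precision_at_k[relevance].mean())

    def mean_average_precision(per_query_relevance):
        # per_query_relevance: list of relevance arrays, one per query
        return float(np.mean([average_precision(r) for r in per_query_relevance]))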


Summary

INTRODUCTION

As one of the methods proposed for approximate nearest neighbor search, hashing has been widely studied for several decades and used in various large-scale applications, including object recognition, computer vision, image retrieval, and related areas [1]–[4]. Data-independent hashing methods do not use training data to learn binary codes and hash functions. Most work therefore focuses on data-dependent hashing methods, which make full use of the training data to learn compact hash codes and hash functions. However, due to the instability of the algorithms and their high time complexity, these methods fall somewhat short either in retrieval precision or in training time, which motivates this work. The proposed method regresses label information to hash codes on each single agent and adds a nonlinear embedding to the local objective functions to ensure that binary codes belonging to different classes are distinguishable. We regress label information to binary codes in distributed hash learning; at the step of updating the local projection matrix, regressing label information to learn the binary codes reduces the computational complexity. The r × r identity matrix is denoted by I_r.
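To illustrate the label-to-code regression and the alternating updates described above, here is a minimal single-agent NumPy sketch in the spirit of FSDH; the variable names, regularization weights, and initialization are illustrative assumptions rather than the paper's implementation, and the distributed version would additionally enforce the consistency constraints across agents:

    import numpy as np

    def fsdh_single_agent(Phi, Y, r, lam=1.0, nu=1e-5, n_iter=5, seed=0):
        # Phi: (n, m) nonlinear embedding of the training data (e.g. RBF features)
        # Y:   (n, c) one-hot label matrix; r: code length
        rng = np.random.default_rng(seed)
        n, m = Phi.shape
        B = np.sign(rng.standard_normal((n, r)))   # random initial codes

        # Regularized inverses reused by the closed-form steps below
        Y_inv = np.linalg.inv(Y.T @ Y + lam * np.eye(Y.shape[1]))
        Phi_inv = np.linalg.inv(Phi.T @ Phi + lam * np.eye(m))

        for _ in range(n_iter):
            G = Y_inv @ (Y.T @ B)      # regress labels to codes (G-step)
            P = Phi_inv @ (Phi.T @ B)  # local projection matrix (P-step)
            B = np.sign(Y @ G + nu * Phi @ P)  # closed-form binary-code update
            B[B == 0] = 1              # break sign ties
        return B, P

New queries are then encoded by binarizing the projected embedding, e.g. np.sign(phi_q @ P) for a query embedding phi_q.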

FORMULATION OF DISTRIBUTED FAST SUPERVISED HASH FUNCTION
DISTRIBUTED LEARNING AND OPTIMIZATION
COMPLEXITY ANALYSIS
COMMUNICATION COMPLEXITY
EXPERIMENTS
Findings
CONCLUSION
