Abstract

Hashing methods have attracted much attention due to their superior time and storage properties for image retrieval. To learn similarity-preserving hash functions, most existing methods are designed for the centralized setting. However, modern data storage systems are distributed to increase scalability, and aggregating all the data at a fusion center is infeasible because of the prohibitive communication and computation overhead. Motivated by this, several methods have been proposed for hashing over distributed data. However, these methods mostly focus on extending one specific hashing scheme to a distributed model without considering generality. In this letter, we propose a novel general distributed hash learning model, which can serve as an effective distributed counterpart of most hashing methods. The proposed model achieves up to 15.2% accuracy gains over state-of-the-art distributed hashing methods, while its communication cost is independent of the data size.
