Abstract

Matrix factorization has been applied to cross-view hashing, where basis functions are learned to map data from different views into a common Hamming embedding. However, the mapping between the original data matrices and the Hamming embedding may carry complex hierarchical information that existing methods cannot capture. In addition, previous matrix-factorization-based hashing methods rely on a relaxation technique that can lead to large quantization error. To address these issues, this paper presents a novel Supervised Discrete Deep Matrix Factorization (SDDMF) method for cross-view hashing. We introduce deep matrix factorization so that SDDMF learns a set of hierarchical basis functions together with unified binary codes shared across views. In addition, a classification error term is incorporated into the objective to make the binary codes discriminative. We then employ a linearization technique to optimize the objective directly under the discrete constraints, which significantly reduces the quantization error. Experimental results on three standard image-text datasets show that SDDMF significantly outperforms several state-of-the-art methods.
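For intuition, a schematic objective of the kind described above can be written as follows. The abstract does not give the exact formulation, so all symbols here are illustrative assumptions: $X^{(v)}$ denotes the data matrix of the $v$-th view, $U_1^{(v)},\dots,U_m^{(v)}$ the hierarchical basis matrices of an $m$-layer factorization, $B$ the unified binary codes, $Y$ the label matrix, $W$ a linear classifier, and $\lambda,\mu$ trade-off parameters.

\[
\min_{\{U_i^{(v)}\},\, B,\, W} \;\; \sum_{v=1}^{V} \bigl\| X^{(v)} - U_1^{(v)} U_2^{(v)} \cdots U_m^{(v)} B \bigr\|_F^2 \;+\; \lambda \,\bigl\| Y - W B \bigr\|_F^2 \;+\; \mu \,\| W \|_F^2
\quad \text{s.t.} \;\; B \in \{-1,+1\}^{k \times n}.
\]

Keeping the constraint $B \in \{-1,+1\}^{k \times n}$ inside the optimization, rather than relaxing it to a continuous domain and thresholding afterwards, is what avoids the quantization error mentioned above.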
