Abstract

Hashing methods have attracted great attention in multimedia tasks due to their effectiveness and efficiency. However, most existing methods generate binary codes by relaxing the binary constraints, which may cause large quantization error. In addition, most supervised cross-modal approaches preserve the similarity relationship by constructing a large n×n similarity matrix, which requires huge computation and makes these methods unscalable. To address these challenges, this article presents a novel algorithm called the scalable discrete matrix factorization and semantic autoencoder method (SDMSA). SDMSA is a two-stage method. In the first stage, a matrix factorization scheme is utilized to learn the latent semantic information, with the label matrix incorporated into the loss function instead of the similarity matrix; the binary codes are then generated from the latent representations. During optimization, manipulating a large n×n similarity matrix is avoided, and the hash codes can be generated directly. In the second stage, a novel hash function learning scheme based on the autoencoder is proposed. The encoder-decoder paradigm aims to learn projections: the encoder projects feature vectors to code vectors, and the decoder projects the code vectors back to the original feature vectors. This encoder-decoder scheme ensures that the embedding preserves both the semantic and the feature information. Specifically, two algorithms, SDMSA-lin and SDMSA-ker, are developed under the SDMSA framework. Owing to these merits, SDMSA yields more semantically meaningful binary hash codes. Extensive experiments on several databases show that SDMSA-lin and SDMSA-ker achieve promising performance.
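To make the two-stage pipeline concrete, the sketch below illustrates the general idea on toy data: stage 1 factorizes the n×c label matrix (rather than an n×n similarity matrix) into latent representations that are binarized into codes, and stage 2 fits a tied-weight linear encoder-decoder between features and codes by solving a Sylvester equation. This is a minimal illustration under assumed dimensions, regularization weights (lam, mu), alternating-least-squares updates, and a generic semantic-autoencoder formulation; it is not the authors' SDMSA implementation.

```python
# Minimal, hypothetical sketch of the two-stage idea (not the authors' SDMSA code).
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)

# Toy data: n samples, d-dim features X, c-class label matrix Y, r-bit codes.
n, d, c, r = 500, 64, 20, 16
X = rng.standard_normal((n, d))
Y = (rng.random((n, c)) < 0.1).astype(float)   # multi-label indicator matrix

# ---- Stage 1: factorize the n x c label matrix (no n x n similarity matrix) ----
# Y ~= V @ U.T, with latent representations V (n x r) found by a few
# regularized alternating least-squares updates, then binarized into codes B.
lam = 1e-2                                     # assumed regularization weight
U = rng.standard_normal((c, r))
for _ in range(20):
    V = Y @ U @ np.linalg.inv(U.T @ U + lam * np.eye(r))
    U = Y.T @ V @ np.linalg.inv(V.T @ V + lam * np.eye(r))
B = np.where(V >= 0, 1.0, -1.0)                # hash codes in {-1, +1}

# ---- Stage 2: autoencoder-style hash function (tied linear encoder/decoder) ----
# Find one projection W so that W x encodes a feature into a code and W.T z
# decodes a code back to the feature:
#   min_W ||Xt - W.T S||^2 + mu * ||W Xt - S||^2,
# whose optimum solves the Sylvester equation
#   (S S.T) W + W (mu Xt Xt.T) = (1 + mu) S Xt.T.
mu = 1.0                                       # assumed trade-off weight
Xt, S = X.T, B.T                               # d x n features, r x n codes
W = solve_sylvester(S @ S.T, mu * (Xt @ Xt.T), (1.0 + mu) * (S @ Xt.T))

# Out-of-sample hashing: encode features to codes, and check decoding quality.
codes = np.where((W @ Xt) >= 0, 1.0, -1.0).T
print("code agreement:", (codes == B).mean())
print("relative reconstruction error:",
      np.linalg.norm(Xt - W.T @ S) / np.linalg.norm(Xt))
```

The point of working with the n×c label matrix in stage 1 is scalability: the per-iteration cost grows with n·c·r instead of n², which is what lets the method avoid the quadratic-size similarity matrix used by many supervised cross-modal hashing approaches.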
