Abstract

Multimodal hashing methods have gained considerable attention in recent years due to their effectiveness and efficiency for cross-modal similarity search. Existing multimodal hashing methods either learn unified hash codes for different modalities or learn individual hash codes for each modality and then exploit the cross-correlations between them. Generally, learning unified hash codes tends to preserve the shared properties of multimodal data, while learning individual hash codes tends to preserve the specific properties of each modality. A crucial bottleneck remains: how to learn hash codes that simultaneously preserve both the shared and the modality-specific properties of multimodal data. We therefore present a joint and individual matrix factorization hashing (JIMFH) method, which not only learns unified hash codes for multimodal data to preserve their common properties but also learns individual hash codes for each modality to retain its specific properties. JIMFH learns unified hash codes by joint matrix factorization, which factorizes all modalities into a shared latent semantic space, and learns individual hash codes by individual matrix factorization, which separately factorizes each modality into a modality-specific latent semantic space. Finally, the unified and individual hash codes are combined to obtain the final hash codes. In this way, the hash codes learned by JIMFH preserve both the shared and the specific properties of multimodal data, which enhances retrieval performance. Comprehensive experiments show that JIMFH outperforms many state-of-the-art methods on cross-modal retrieval tasks.
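The abstract's pipeline (joint factorization into a shared latent space, per-modality individual factorization, then binarization and concatenation) can be sketched as follows. This is an illustrative simplification under assumed details, not the paper's exact objective: it uses plain alternating least squares with a small ridge term, ignores any graph regularization or weighting the full method may include, and binarizes by thresholding at the per-bit mean. The function name and all parameters (`k_shared`, `k_specific`, `n_iter`) are hypothetical.

```python
import numpy as np

def jimfh_sketch(modalities, k_shared=16, k_specific=8, n_iter=50, seed=0):
    """Illustrative JIMFH-style hash learning (an assumption-laden sketch,
    not the authors' implementation).

    modalities: list of arrays, each (d_v, n) -- features as columns,
                all modalities describe the same n samples.
    Returns one (k_shared + k_specific, n) binary code matrix per modality.
    """
    rng = np.random.default_rng(seed)
    n = modalities[0].shape[1]
    # Joint factorization X_v ~ U_v V: one shared latent matrix V for all modalities.
    V = rng.standard_normal((k_shared, n))
    Us = [rng.standard_normal((X.shape[0], k_shared)) for X in modalities]
    # Individual factorization X_v ~ P_v Q_v: a separate latent matrix per modality.
    Ps = [rng.standard_normal((X.shape[0], k_specific)) for X in modalities]
    Qs = [rng.standard_normal((k_specific, n)) for X in modalities]

    eye_s, eye_i = np.eye(k_shared), np.eye(k_specific)
    for _ in range(n_iter):
        # Update joint factors: every modality's U_v fits the shared V ...
        for v, X in enumerate(modalities):
            Us[v] = X @ V.T @ np.linalg.inv(V @ V.T + 1e-6 * eye_s)
        # ... and V aggregates evidence from all modalities at once.
        A = sum(U.T @ U for U in Us) + 1e-6 * eye_s
        B = sum(U.T @ X for U, X in zip(Us, modalities))
        V = np.linalg.solve(A, B)
        # Update individual factors: each modality factorized on its own.
        for v, X in enumerate(modalities):
            Ps[v] = X @ Qs[v].T @ np.linalg.inv(Qs[v] @ Qs[v].T + 1e-6 * eye_i)
            Qs[v] = np.linalg.solve(Ps[v].T @ Ps[v] + 1e-6 * eye_i, Ps[v].T @ X)

    # Binarize by sign around the per-bit mean, then concatenate
    # unified (shared) bits with each modality's individual bits.
    unified = np.sign(V - V.mean(axis=1, keepdims=True))
    return [np.vstack([unified, np.sign(Q - Q.mean(axis=1, keepdims=True))])
            for Q in Qs]
```

A typical call would pass, say, an image-feature matrix and a text-feature matrix over the same samples; retrieval then compares codes by Hamming distance, with the shared block comparable across modalities.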
