Abstract

• The proposed model achieves efficient and accurate cross-modal retrieval.
• The proposed model integrates data similarity and category information to improve performance.
• The authors validate the effectiveness of the proposed model through experiments on several public data sets.

With the emergence and development of big data, cross-modal hash retrieval has become progressively more important in large-scale multi-modal retrieval tasks owing to its accuracy and efficiency. It completes the retrieval task in a common low-dimensional space by finding a shared semantic space for heterogeneous data from different modalities. Recently, many works have concentrated on supervised cross-modal hashing and achieved higher retrieval accuracy. However, challenges remain in how to preserve the local geometric structure of the original space within the common space and how to use the supervision information efficiently. To address these issues, this paper proposes a supervised hash retrieval method based on matrix factorization (LCSMFH) that preserves the inter-modal and intra-modal similarity of the original space and makes the most of the label information to improve retrieval performance. Experiments on two benchmark data sets show that our method is effective and outperforms state-of-the-art methods.
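The core idea described above can be illustrated with a minimal sketch of matrix-factorization-based cross-modal hashing: two modality feature matrices are jointly factorized into a shared low-dimensional representation, which is then binarized into hash codes. This is a hypothetical simplification for illustration only; the function and parameter names (`cmf_hash`, `lam`, `n_bits`) are invented here, and the paper's similarity-preservation and label terms are omitted.

```python
import numpy as np

def cmf_hash(X1, X2, n_bits=16, lam=0.5, n_iters=50, seed=0):
    """Learn a shared latent space for two modalities via collective
    matrix factorization, then binarize it into hash codes.

    Minimizes  lam*||X1 - U1 V||^2 + (1-lam)*||X2 - U2 V||^2
    by alternating least squares. This is a hypothetical sketch of the
    general technique, not the paper's full objective (similarity and
    label terms are left out).

    X1: (d1, n) and X2: (d2, n) hold column-wise features of the same
    n samples in each modality.
    """
    rng = np.random.default_rng(seed)
    n = X1.shape[1]
    V = rng.standard_normal((n_bits, n))   # shared representation
    eps = 1e-6 * np.eye(n_bits)            # ridge term for stability
    for _ in range(n_iters):
        # Update modality-specific bases with V fixed (least squares).
        U1 = X1 @ V.T @ np.linalg.inv(V @ V.T + eps)
        U2 = X2 @ V.T @ np.linalg.inv(V @ V.T + eps)
        # Update the shared representation with U1, U2 fixed.
        A = lam * U1.T @ U1 + (1 - lam) * U2.T @ U2 + eps
        Bv = lam * U1.T @ X1 + (1 - lam) * U2.T @ X2
        V = np.linalg.solve(A, Bv)
    # Threshold at the per-bit mean to get roughly balanced hash codes.
    return (V > V.mean(axis=1, keepdims=True)).astype(np.uint8)
```

At query time, a new item from either modality would be projected into the shared space (via its learned basis) and binarized the same way, so retrieval reduces to Hamming-distance comparison of compact codes.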
