Abstract

In the era of big data, people's lives are filled with all kinds of information. Scholars rely on scientific and technological information to understand current technology trends and to assess prospects for future development. As a result, more and more scholars are no longer satisfied with single-modal retrieval methods; obtaining more intelligent cross-media retrieval results places higher demands on search engines, and bridging the semantic gap between different modalities is a key problem that must be solved. To address these problems, this paper proposes a Multi-feature Fusion based Cross-Media Retrieval (MFCMR) method. Our method integrates multiple features to promote semantic understanding and adopts adversarial learning to further improve the accuracy of the common-subspace representation. Retrieval results are then ranked by similarity within this shared space. Extensive experiments on real-world datasets show that our method achieves better cross-media retrieval performance than competing methods.
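The abstract only names the components of MFCMR; the feature-fusion and adversarial-learning stages are not specified here. As a minimal illustrative sketch of the final step it describes, the Python snippet below ranks candidates of one modality by similarity to a query of another modality, assuming both have already been projected into a shared subspace. The function and variable names (`rank_by_cosine`, `query_vec`, `gallery`) and the use of cosine similarity are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: similarity-based ranking in a shared subspace.
# Assumes embeddings were already produced by some cross-media model;
# this is NOT the paper's MFCMR implementation.
import numpy as np

def rank_by_cosine(query_vec: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by descending cosine similarity.

    query_vec: (d,)   embedding of the query (e.g., a text) in the shared space
    gallery:   (n, d) embeddings of candidates (e.g., images) in the same space
    """
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q               # cosine similarity of each candidate to the query
    return np.argsort(-sims)   # best matches first

# Example: a text query retrieving from 5 image embeddings in a 4-d subspace.
rng = np.random.default_rng(0)
order = rank_by_cosine(rng.normal(size=4), rng.normal(size=(5, 4)))
print(order)
```

Because both modalities live in the same space, a single similarity measure suffices for text-to-image and image-to-text retrieval alike, which is what makes the common-subspace formulation attractive.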
