Abstract

The popular Sina Weibo (microblog) application supports text retrieval well but offers only limited support for image retrieval. To address this gap, we build a cross-modal data set based on the characteristics of Sina Weibo data and propose a deep-learning-based cross-modal retrieval and identification method targeting the official microblogs of luxury brands. The retrieval model combines a convolutional neural network with a TF-IDF model: the convolutional neural network extracts features from Weibo images, and feature matching is performed in a high-level semantic space using a similarity measure, thereby achieving cross-modal retrieval. This research provides theoretical guidance for improving multi-modal microblog data retrieval and has practical engineering value for the design, implementation, and improvement of future Sina Weibo retrieval methods.
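The text side of the pipeline described above can be illustrated with a minimal sketch: computing TF-IDF vectors for a small corpus of microblog texts and matching them with cosine similarity. This is an assumption-laden illustration, not the paper's implementation; the function names are hypothetical, the tokenization is simplified, and the CNN image features that would occupy the same semantic space are omitted.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight vectors (as sparse dicts) for tokenized documents.

    TF is term frequency normalized by document length; IDF is log(N / df).
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        vecs.append({t: (c / total) * idf[t] for t, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(w * v[t] for t, w in u.items() if t in v)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus of pre-tokenized microblog texts (hypothetical data).
corpus = [
    ["luxury", "bag", "leather", "new"],
    ["weather", "rain", "today"],
    ["luxury", "watch", "limited"],
]
vecs = tfidf_vectors(corpus)

# Doc 0 and doc 2 share the term "luxury", so they score higher
# against each other than against the unrelated doc 1.
sim_related = cosine(vecs[0], vecs[2])
sim_unrelated = cosine(vecs[0], vecs[1])
```

In a full cross-modal system, CNN image embeddings and TF-IDF text vectors would first be projected into a shared semantic space before applying the same similarity measure.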
