Abstract
The popular Sina microblog (Weibo) application supports text retrieval well but offers only limited support for image retrieval. To address this gap, this article constructs a cross-modal image-text data set based on the characteristics of Sina microblog data and proposes a deep-learning-based cross-modal retrieval and identification method for the official microblogs of luxury brands. The retrieval model combines a convolutional neural network with a TF-IDF model: the convolutional neural network extracts features from Weibo images, and feature matching is performed in a high-level semantic space using a similarity measure to achieve cross-modal retrieval. This research offers theoretical guidance for improving the design of multi-modal microblog data retrieval and has engineering application value for the design, implementation, and improvement of future Sina Weibo retrieval methods.
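The matching step described above can be sketched in miniature. The following is a simplified illustration, not the paper's implementation: it assumes a CNN (not shown here) has already labelled a query image with semantic tags, represents microblog texts as TF-IDF vectors over a shared vocabulary, and ranks texts by cosine similarity. All function names and the toy corpus are hypothetical.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse TF-IDF vectors (as dicts) for a small corpus."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # Document frequency of each term, then smoothed IDF.
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vecs = []
    for toks in tokenized:
        tf = Counter(toks)
        vecs.append({t: (tf[t] / len(toks)) * idf[t] for t in tf})
    return vecs, idf

def cosine(u, v):
    """Cosine similarity between two sparse vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(image_tags, docs):
    """Rank microblog texts against CNN-predicted image tags."""
    vecs, idf = tfidf_vectors(docs)
    # Build the query vector from tags the corpus vocabulary knows,
    # weighting each known tag by its IDF.
    query = {t: idf[t] for t in image_tags if t in idf}
    scores = [cosine(query, v) for v in vecs]
    return sorted(range(len(docs)), key=lambda i: -scores[i])

docs = [
    "chanel handbag from the new luxury collection",
    "gucci leather shoes on sale this week",
    "sunny weather in beijing today",
]
ranking = retrieve(["chanel", "handbag", "leather"], docs)
print(ranking[0])  # → 0: the Chanel microblog ranks first
```

In a full system the query side would be a CNN's output mapped into the same semantic space as the text vectors; here the predicted tags stand in for that mapping.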