Abstract

Most current information retrieval systems rely on keywords appearing in the text or on statistical measures derived from word counts. Additional semantic information, such as synonyms and polysemous words, can also be incorporated to improve the accuracy of similarity matching and filtering. However, today's networks produce not only a large number of new words every day but also pictures, audio, video, and other media. Hand-crafted features are difficult to define for such newly emerging data, and low-dimensional feature abstractions struggle to represent the overall semantics of text and images. In this paper, we propose a semantic feature extraction algorithm based on deep networks, which applies a local attention mechanism to the feature generation models for images and texts. The retrieval of text and image information is converted into a vector similarity calculation, which improves retrieval speed while preserving the semantic relevance of the results. Text and image feature extraction models were trained and tested on many years of compiled news text and image data, and the results show that the deep feature model has great advantages in semantic expression and feature extraction. In addition, incorporating the similarity calculation into the training process further improves retrieval accuracy.
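The abstract's core retrieval step, converting text and image retrieval into a vector similarity calculation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the deep network has already produced fixed-length feature vectors for the query and the corpus, and uses cosine similarity as the ranking measure.

```python
import numpy as np

def cosine_scores(query_vec, corpus_vecs):
    # Normalize rows so that a dot product equals cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    return c @ q

def retrieve(query_vec, corpus_vecs, top_k=3):
    # Rank corpus items by descending cosine similarity to the query
    scores = cosine_scores(query_vec, corpus_vecs)
    order = np.argsort(-scores)[:top_k]
    return order, scores[order]

# Toy example with random stand-ins for deep feature vectors:
# 10 corpus items embedded in a 128-dimensional feature space.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(10, 128))
# A query vector close to corpus item 4 (item 4 plus small noise).
query = corpus[4] + 0.05 * rng.normal(size=128)

idx, scores = retrieve(query, corpus)
print(idx[0])  # index of the most similar corpus item
```

Because ranking reduces to matrix-vector products, retrieval over a precomputed feature matrix is fast, which is the speed advantage the abstract refers to.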
