Abstract

With the rapid growth of digital documents, big data places higher demands on document image retrieval. Document image retrieval sits between classical information retrieval and content-based image retrieval. Traditional document image retrieval relies on complex OCR-based text recognition and text similarity detection. This paper proposes a new content-based retrieval method for document graphics objects, focusing on feature extraction, feature fusion, and indexing. A pretrained convolutional neural network is used to learn image representations for the retrieval task and to extract multiple features from each document image. PCA is then applied to reduce the dimensionality of the extracted high-dimensional features, and an improved rank fusion method based on Rank_avg combines them into a new feature matrix. Transfer learning is used to fine-tune the pretrained CNN for the retrieval algorithm, which mitigates the shortage of training data. Finally, candidates are ranked by feature similarity, and a query index is built with an inverted-indexing technique based on a visual vocabulary. Experiments on document image datasets containing charts and text show that the method retrieves document images with similar textual content more effectively. Fusing dimension-reduced CNN features effectively improves the mean average precision (MAP) of the retrieval system; the best-performing fusion reaches a MAP of 0.85. The visual-vocabulary-based inverted index reduces retrieval time by 27%.
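The pipeline outlined above (pretrained-CNN feature extraction, PCA reduction, Rank_avg fusion, and similarity-based ranking) can be sketched as follows. This is a minimal illustration, assuming ResNet-50 and VGG-16 as the pretrained backbones, cosine similarity for ranking, and a recent torchvision release; the paper's actual architectures, feature dimensions, fine-tuning procedure, and the visual-vocabulary inverted index are not reproduced here.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.decomposition import PCA

# Two pretrained backbones stand in for the "various features" mentioned in
# the abstract; the specific networks are an assumption for this sketch.
backbones = {
    "resnet50": models.resnet50(weights=models.ResNet50_Weights.DEFAULT),
    "vgg16": models.vgg16(weights=models.VGG16_Weights.DEFAULT),
}
for net in backbones.values():
    net.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def extract_features(net, image_paths):
    """Global descriptors from the network with its classifier head removed."""
    trunk = torch.nn.Sequential(*list(net.children())[:-1])
    feats = []
    with torch.no_grad():
        for path in image_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(trunk(x).flatten(1).squeeze(0).numpy())
    return np.vstack(feats)


def pca_reduce(db_feats, query_feats, dim=128):
    """Reduce high-dimensional CNN features with PCA fitted on the database."""
    dim = min(dim, db_feats.shape[0], db_feats.shape[1])
    pca = PCA(n_components=dim).fit(db_feats)
    return pca.transform(db_feats), pca.transform(query_feats)


def cosine_ranks(db_feats, query_feat):
    """Rank positions (0 = most similar) of database images for one query."""
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    order = np.argsort(-(db @ q))
    ranks = np.empty(len(order), dtype=int)
    ranks[order] = np.arange(len(order))
    return ranks


def rank_avg_fusion(rank_lists):
    """Rank_avg-style fusion: average each document's rank across feature
    types and re-sort; a lower averaged rank means a better match."""
    avg = np.mean(np.vstack(rank_lists), axis=0)
    return np.argsort(avg)
```

A query would be answered by computing cosine_ranks for each reduced feature type and fusing the resulting rank lists with rank_avg_fusion. Fusing at the rank level, rather than at the score level, avoids normalizing similarity scores produced by descriptors of different dimensionality and scale.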
