Abstract

Content-based image retrieval (CBIR) is the task of retrieving the images most similar to a query from a given pool of images or database. The success of existing work relies on exploiting both local and global feature information, which yields better retrieval performance than using either alone. Recently, the CBIR area has been dominated by a two-stage retrieval framework that uses global features to obtain initial retrieval results and local features for reranking in a second stage. In this study, instead of using local and global features separately across two stages, we propose a dot-product based local and global (DPLG) feature fusion module that produces a single comprehensive global feature descriptor. The proposed fusion module is trained jointly, end to end, within the convolutional backbone. Experimental results show that the proposed module achieves new state-of-the-art results on several benchmark datasets.
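The abstract does not specify the exact DPLG architecture, but the general idea of dot-product based local/global fusion can be sketched as follows. In this hypothetical illustration, local features are weighted by their dot-product similarity to the global descriptor (a softmax attention), pooled, and combined with the global descriptor into one retrieval vector; all shapes and the residual combination are assumptions, not the paper's actual design.

```python
import numpy as np

def dplg_fuse(local_feats: np.ndarray, global_feat: np.ndarray) -> np.ndarray:
    """Illustrative dot-product local/global fusion (assumed design).

    local_feats: (N, D) local descriptors, e.g. a flattened spatial feature map.
    global_feat: (D,) global descriptor, e.g. from pooling the backbone output.
    Returns an L2-normalized (D,) fused descriptor suitable for retrieval.
    """
    scores = local_feats @ global_feat          # (N,) dot-product similarities
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    attended = weights @ local_feats            # (D,) attention-pooled local feature
    fused = attended + global_feat              # combine local and global information
    return fused / np.linalg.norm(fused)        # L2-normalize for cosine retrieval

rng = np.random.default_rng(0)
local = rng.normal(size=(49, 128))  # e.g. a 7x7 spatial map, 128-dim features
g = rng.normal(size=128)
desc = dplg_fuse(local, g)
print(desc.shape)  # (128,)
```

In a trained system this fusion would sit on top of the convolutional backbone and be learned end to end, rather than applied to random features as in this sketch.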
