Abstract

Although studied for decades, constructing effective image retrieval systems remains an open problem in a wide range of applications. Impressive advances have been made in representing image content, mainly driven by the development of Convolutional Neural Networks (CNNs) and Transformer-based models. On the other hand, effectively computing the similarity between such representations is still challenging, especially in collections in which images are structured in manifolds. This paper introduces a novel solution to this problem based on dimensionality reduction techniques, often used for data visualization. The key idea is to exploit the spatial relationships defined by neighbor embedding data visualization methods, such as t-SNE and UMAP, to compute a more effective distance/similarity measure between images. Experiments were conducted on several widely used datasets. The results indicate that the proposed approach leads to significant gains over the original feature representations, and that it is competitive with state-of-the-art image retrieval approaches.
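The core idea, ranking images by distances measured in a low-dimensional neighbor embedding rather than in the original feature space, can be sketched as follows. This is an illustrative outline, not the paper's method: it uses a PCA projection (via NumPy's SVD) as a runnable stand-in for a neighbor-embedding method such as t-SNE or UMAP, and the function names `embed_2d` and `rank_by_embedding` are hypothetical.

```python
import numpy as np

def embed_2d(features: np.ndarray) -> np.ndarray:
    """Project features to 2-D. Stand-in for a neighbor-embedding
    method (t-SNE/UMAP); here a simple PCA via SVD is used so the
    sketch stays self-contained."""
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T

def rank_by_embedding(features: np.ndarray, query_idx: int) -> np.ndarray:
    """Rank all images by Euclidean distance to the query, measured
    in the low-dimensional embedding instead of the feature space."""
    emb = embed_2d(features)
    dists = np.linalg.norm(emb - emb[query_idx], axis=1)
    order = np.argsort(dists)
    return order[order != query_idx]  # drop the query itself

# Toy example: four "image features"; items 0/1 and 2/3 form two groups.
feats = np.array([[0.0, 0.0, 0.0],
                  [0.1, 0.0, 0.0],
                  [5.0, 5.0, 5.0],
                  [5.2, 5.0, 5.0]])
ranking = rank_by_embedding(feats, query_idx=0)
```

In practice, one would replace `embed_2d` with the output of an actual t-SNE or UMAP fit over the whole collection, so that the manifold structure of the data shapes the resulting distances.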
