Abstract

Image similarity is a useful concept in content-based image retrieval (CBIR), where images are retrieved on the basis of their visual content. Because an image admits far more interpretations than a text, visual similarity can differ completely from semantic similarity. We have developed tools for searching similar images using both global and local approaches to find near-similar images. In this paper we propose a method of bridging the local and global levels, which should solve the problem of the limited, non-adaptable dictionary that arises when automatic annotations are used in a similar-image retrieval task. Our long-term goal is to address the difficult problem shared by all current approaches to CBIR systems and connected with visual similarity: the semantic gap between low-level content and higher-level concepts.
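As a minimal illustration of the kind of *global* measure mentioned above (not the paper's actual method), the following sketch compares two images by histogram intersection over their grayscale intensity histograms. The images here are synthetic flat pixel lists; a real CBIR system would extract histograms from actual image data.

```python
def histogram(pixels, bins=8, max_val=256):
    """Normalized intensity histogram of a flat list of grayscale pixels."""
    counts = [0] * bins
    width = max_val / bins
    for p in pixels:
        counts[min(int(p / width), bins - 1)] += 1
    total = len(pixels)
    return [c / total for c in counts]

def histogram_intersection(h1, h2):
    """Global similarity in [0, 1]; 1.0 means identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two small synthetic "images" with similar intensity distributions.
img_a = [10, 12, 200, 210, 10, 205, 15, 198]
img_b = [11, 14, 199, 212, 12, 207, 13, 200]

sim = histogram_intersection(histogram(img_a), histogram(img_b))
print(round(sim, 3))
```

Such a global descriptor is cheap and compact, but, as the abstract notes, two visually similar histograms can belong to semantically unrelated images, which is one face of the semantic gap.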
