Abstract

Many of the available image databases have keyword annotations associated with the images. In spite of the availability of good-quality low-level visual features that reflect the physical content well, image retrieval based on visual features alone suffers from the semantic gap. Text annotations relate to the image context or to a semantic interpretation of the visual content and are not necessarily directly linked to the visual appearance of the images. Keywords and visual features thus provide complementary information, and using both sources is an advantage in many applications, as recent work in this area reflects. In this paper, we address the challenge of semantic gap reduction using a hybrid visual and conceptual representation of the content within an active relevance feedback context. We introduce a new feature vector, based on the keyword annotations available for the images, which makes use of conceptual information extracted from an external lexical database, represented by a set of core concepts. Our experiments show that the proposed hybrid conceptual and visual feature vector dramatically improves the quality of the relevance feedback results.
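The hybrid representation described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the core-concept set, the keyword-to-concept mapping, and the weighting scheme are all assumptions standing in for what the paper derives from an external lexical database (such as WordNet hypernym hierarchies).

```python
# Hypothetical sketch of a hybrid conceptual + visual feature vector.
# The toy mapping below stands in for concepts extracted from an external
# lexical database; all names and values here are illustrative assumptions.

CORE_CONCEPTS = ["animal", "vehicle", "person", "landscape"]

# In the paper's setting, this mapping would come from the lexical database
# (e.g. by following hypernym paths up to the chosen core concepts).
KEYWORD_TO_CONCEPT = {
    "dog": "animal", "cat": "animal",
    "car": "vehicle", "bus": "vehicle",
    "child": "person",
    "mountain": "landscape", "beach": "landscape",
}

def conceptual_vector(keywords):
    """One component per core concept: 1.0 if any annotation maps to it."""
    hits = {KEYWORD_TO_CONCEPT[k] for k in keywords if k in KEYWORD_TO_CONCEPT}
    return [1.0 if c in hits else 0.0 for c in CORE_CONCEPTS]

def hybrid_vector(visual_features, keywords, alpha=0.5):
    """Concatenate a weighted visual part with a weighted conceptual part."""
    concept = conceptual_vector(keywords)
    return ([alpha * v for v in visual_features]
            + [(1.0 - alpha) * c for c in concept])

# An image with visual features [0.2, 0.7] annotated "dog", "beach":
v = hybrid_vector([0.2, 0.7], ["dog", "beach"])
# -> [0.1, 0.35, 0.5, 0.0, 0.0, 0.5]
```

A relevance feedback loop would then compute similarities on these hybrid vectors, so that feedback on semantically related images can transfer through the shared core-concept components even when visual appearance differs.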
