Abstract

The effectiveness of image indexing and retrieval systems is hampered by both the semantic gap and the user intention gap. The first relates to the difficulty of characterizing visual semantic information through low-level extracted features (e.g., color, texture), while the second highlights the difficulty human users face in conveying their search intents through traditional relevance-feedback or query-by-example mechanisms. We address both issues by introducing vocabularies of visual concepts that are mapped to extracted low-level features through an automated learning paradigm. These are then instantiated within a semantic indexing and retrieval framework based on a Bayesian model of the joint distribution of visual and semantic concepts. To address the user intention gap and enrich the expressiveness of the retrieval module, visual and semantic concepts can be coupled within text-based queries. We are therefore able to process not only single-concept queries, as state-of-the-art solutions do, but also topic-based queries, i.e., non-trivial queries involving multiple characterizations of the visual content. We evaluate our proposal in a precision/recall-based evaluation framework on the IAPR TC-12 benchmark dataset.
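
To make the ranking idea concrete, below is a minimal sketch of how a Bayesian joint-distribution model could score images against a multi-concept (topic-based) query. It is an illustrative assumption, not the paper's actual model: the function name, the naive-Bayes-style conditional-independence factorization across query concepts, and the toy probability tables are all hypothetical.

```python
import numpy as np

def score_image(p_visual_given_image, joint_sv, query_concepts, concept_index):
    """Rank score for one image under a multi-concept (topic-based) query.

    p_visual_given_image : P(v | image), posteriors of each visual concept
        given the image's low-level features, from trained classifiers.
    joint_sv : P(s, v), joint distribution of semantic and visual concepts
        estimated from an annotated training corpus; shape (n_sem, n_vis).

    Assumes query concepts are conditionally independent given the image,
    a simplification chosen for this sketch.
    """
    p_v = joint_sv.sum(axis=0)  # marginal P(v)
    score = 1.0
    for s in query_concepts:
        s_idx = concept_index[s]
        # Marginalize over visual concepts:
        # P(s | image) = sum_v P(s | v) * P(v | image), with P(s | v) = P(s, v) / P(v).
        p_s_given_v = joint_sv[s_idx] / p_v
        score *= float(p_s_given_v @ p_visual_given_image)
    return score

# Toy usage with two semantic and three visual concepts (hypothetical values).
concept_index = {"beach": 0, "mountain": 1}
joint_sv = np.array([[0.20, 0.05, 0.10],   # P(beach, v)
                     [0.05, 0.30, 0.30]])  # P(mountain, v)
p_v_img = np.array([0.6, 0.3, 0.1])        # P(v | image) from the classifiers
print(score_image(p_v_img, joint_sv, ["beach", "mountain"], concept_index))
```

Images would then be ranked by this score for the given query, with single-concept queries falling out as the special case of a one-element concept list.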
