Abstract

In this paper, a new method for object-based image retrieval is proposed. The technique is designed to adaptively and efficiently locate salient blocks in images. Salient blocks are used to represent semantically meaningful objects and to perform object-oriented annotation and retrieval. An algorithm is proposed to locate the most suitable blocks of arbitrary size that represent the query concept or object of interest in an image. To annotate single objects in a way consistent with human perception, associations between several low-level patterns and semantic concepts are modelled in an optimised multi-descriptor space. The approach starts by dividing the image into blocks partitioned according to several different layouts. A fitting block is then selected according to a similarity metric acting on concept-specific multi-feature spaces. The similarity metric is defined as a linear combination of single-feature-space metrics whose weights are learned from a group of representative salient blocks using multi-objective optimisation. Relevance feedback is seamlessly integrated into the retrieval process: in each iteration, the user selects images relevant to the query object, and the salient blocks of the selected images are then used as training examples. The proposed technique was thoroughly assessed, and selected results are reported in this paper to demonstrate its performance.
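For concreteness, the combined similarity metric described above can be sketched as follows. Assuming a query block q, a candidate block b, per-feature distances d_i and learned weights w_i (notation introduced here for illustration only, not taken verbatim from the paper), the concept-specific metric would take the form

    D(q, b) = \sum_{i=1}^{n} w_i \, d_i(q, b), \qquad w_i \ge 0, \quad \sum_{i=1}^{n} w_i = 1,

where the weights w_i are the quantities learned from the representative salient blocks via multi-objective optimisation, and the salient block of an image would be the candidate b^{*} = \arg\min_{b} D(q, b) taken over all blocks generated by the different layouts.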
