Abstract

The central problem of most content-based image retrieval approaches is poor quality in terms of sensitivity (recall) and specificity (precision). This problem is widely attributed to the semantic gap between high-level concepts and low-level features. In this paper we introduce an approach that reduces the impact of the semantic gap by integrating high-level (semantic) and low-level features to improve the quality of image retrieval queries. Our experiments apply two hierarchical procedures: the first is called keyword-content, and the second content-keyword. Both proposed approaches outperform a single method (keyword-based or content-based) in terms of recall and precision; average precision increased by up to 50%.

Highlights

  • Most content-based image retrieval [1] approaches aim to find images that are semantically similar to a given query

  • Like most previous studies on content-based image retrieval, we evaluate the effectiveness of the system in terms of precision and recall

  • Precision (specificity) is the ratio of the number of relevant images retrieved to the total number of images retrieved, whilst recall (sensitivity) is the ratio of the number of relevant images retrieved to the number of relevant images in the database
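The two ratios above can be sketched directly. This is a minimal illustration, not code from the paper; the image-ID sets are invented for the example.

```python
def precision_recall(retrieved, relevant):
    """Return (precision, recall) given sets of retrieved and relevant image IDs."""
    hits = len(set(retrieved) & set(relevant))  # relevant images that were retrieved
    precision = hits / len(set(retrieved)) if retrieved else 0.0
    recall = hits / len(set(relevant)) if relevant else 0.0
    return precision, recall

# Example: 10 images retrieved, 6 relevant in the database, 4 of them retrieved.
p, r = precision_recall(retrieved=range(10), relevant=[2, 4, 6, 8, 10, 12])
# p = 4/10 = 0.4, r = 4/6 ≈ 0.667
```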


Summary

INTRODUCTION

Most content-based image retrieval [1] approaches aim to find images that are semantically similar to a given query (often a single example image). Here, semantically similar is meant in the sense of human visual perception and usually refers to high-level features. Content-based image retrieval (CBIR) uses low-level visual features to retrieve images, so there is no need to annotate images or to translate users’ queries. In contrast to the parallel approach, a pipeline approach uses either textual or visual information to perform an initial retrieval, then uses the other kind of information to filter out irrelevant images [7]. In both approaches, textual and visual queries are formulated by the user and do not directly influence each other.
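The keyword-content pipeline described above can be sketched as a two-stage process: an initial keyword retrieval followed by content-based re-ranking. All names, data, and the Euclidean-distance ranking below are illustrative assumptions, not the paper's implementation.

```python
def keyword_content_pipeline(query_keywords, query_features, images, top_k=5):
    """images: list of dicts with 'id', 'keywords' (set), 'features' (list of floats)."""
    # Stage 1 (keyword): keep images whose annotations share a query keyword.
    candidates = [img for img in images if img["keywords"] & set(query_keywords)]

    # Stage 2 (content): rank candidates by Euclidean distance in feature space.
    def distance(img):
        return sum((a - b) ** 2 for a, b in zip(img["features"], query_features)) ** 0.5

    return [img["id"] for img in sorted(candidates, key=distance)[:top_k]]

images = [
    {"id": "beach1", "keywords": {"beach", "sea"}, "features": [0.9, 0.1]},
    {"id": "city1",  "keywords": {"city"},         "features": [0.2, 0.8]},
    {"id": "beach2", "keywords": {"beach"},        "features": [0.5, 0.5]},
]
result = keyword_content_pipeline(["beach"], [0.8, 0.2], images)
# "city1" is filtered out in stage 1; the beach images are ranked by feature distance.
```

The content-keyword variant simply swaps the stages: an initial retrieval by feature distance, then keyword filtering of the candidate list.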

RELATED WORKS
THE EXPERIMENTAL RESULTS
CONCLUDING REMARKS
FUTURE WORK