Abstract

Evaluating the performance of Content-Based Image Retrieval (CBIR) systems is a challenging and intricate task, even for experts in the field. The literature presents a vast array of CBIR systems, each applied to different image databases. Traditionally, the automatic metrics used for CBIR evaluation have been borrowed from the Text Retrieval (TR) domain, primarily precision and recall. This paper introduces a novel quantitative metric designed specifically for the characteristics of CBIR. The proposed metric is built around the grouping of relevant images and uses the entropy of the retrieved relevant images. Grouping relevant images together is valuable from a user perspective, as it yields more coherent and meaningful result sets, and the metric explicitly rewards retrievals in which the most relevant results are grouped. The metric also distinguishes between results that appear similar under other metrics, discerning subtle differences among retrieval outcomes; this enhanced discriminatory power is one of its main advantages. In addition, the metric is straightforward to understand and implement, which makes it practical for researchers and practitioners in the field of CBIR. To validate its effectiveness, we conducted a comprehensive comparative study against prominent, well-established CBIR evaluation metrics. The results show that the proposed metric offers robust discrimination power and outperforms existing metrics in accurately evaluating CBIR system performance.
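
The abstract does not reproduce the metric's formula, so the sketch below is purely illustrative: it assumes a block-based Shannon-entropy measure of how relevant images are grouped in a ranked result, and contrasts it with precision and recall on two result lists that those metrics cannot tell apart. The function names, block size, and normalization are assumptions for illustration, not the paper's definitions.

```python
import math

def precision_recall(result, total_relevant):
    """Standard TR-style metrics over a ranked result (1 = relevant, 0 = not)."""
    hits = sum(result)
    precision = hits / len(result) if result else 0.0
    recall = hits / total_relevant if total_relevant else 0.0
    return precision, recall

def grouping_score(result, block_size=2):
    """Hypothetical grouping-aware score (not the paper's exact definition).

    The ranked list is cut into consecutive blocks and the Shannon entropy of
    the distribution of relevant images over those blocks is computed.
    Relevant images concentrated in few blocks -> low entropy -> score near 1;
    relevant images spread over many blocks -> high entropy -> score near 0.
    """
    # Count relevant images in each consecutive block of the ranked list.
    counts = [sum(result[i:i + block_size]) for i in range(0, len(result), block_size)]
    hits = sum(counts)
    if hits == 0:
        return 0.0
    probs = [c / hits for c in counts if c > 0]
    entropy = -sum(p * math.log2(p) for p in probs)
    max_entropy = math.log2(len(counts))  # worst case: spread evenly over all blocks
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

# Two results with identical precision and recall (5 relevant images retrieved,
# 5 relevant images in the database), but different grouping of the relevant images.
grouped   = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
scattered = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]

for name, result in (("grouped", grouped), ("scattered", scattered)):
    p, r = precision_recall(result, total_relevant=5)
    print(f"{name:9s} precision={p:.2f} recall={r:.2f} grouping={grouping_score(result):.3f}")
```

On both lists precision is 0.50 and recall is 1.00, while the entropy-based grouping score is higher for the grouped result than for the scattered one, which is the kind of distinction the abstract attributes to the proposed metric.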
