Abstract

The core of a content-based image retrieval (CBIR) system is an effective understanding of the visual contents of images; the accuracy of a CBIR system depends on it. One of the most prominent issues affecting the performance of a CBIR system is the semantic gap: the variance between the low-level patterns of an image and the high-level abstractions perceived by humans. A robust visual representation of the image combined with relevance feedback (RF) can bridge this gap by extracting distinctive local and global features from the image and by incorporating the valuable information stored as feedback. To address this issue, this article presents a novel adaptive complementary visual word integration method for a robust representation of the salient objects of an image using local and global features based on the bag-of-visual-words (BoVW) model. To analyze the performance of the proposed method, three integration methods based on the BoVW model are proposed and compared in this article: (a) integration of complementary features before clustering (non-adaptive complementary feature integration), (b) integration of complementary visual words after clustering (non-adaptive complementary visual words integration), and (c) adaptive weighting of complementary visual words after clustering based on self-paced learning (the proposed adaptive complementary visual words integration). The performance of the proposed method is further enhanced by incorporating a log-based RF (LRF) method into the model. Qualitative and quantitative analyses carried out on four image datasets show that the proposed adaptive complementary visual words integration method outperforms non-adaptive complementary feature integration, non-adaptive complementary visual words integration, and state-of-the-art CBIR methods in terms of the standard performance evaluation metrics.
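
As a rough illustration of how the three integration strategies differ, the following minimal sketch contrasts them using NumPy and scikit-learn k-means as a stand-in for the adaptive fuzzy k-means clustering used in the article. The descriptor arrays, the vocabulary sizes, and the `w_local`/`w_global` weights (which the paper derives through self-paced learning) are all illustrative placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans  # plain k-means as a stand-in for the
                                    # adaptive fuzzy k-means used in the paper

def bovw_histogram(descriptors, codebook):
    """Quantize descriptors against a codebook and return an
    L1-normalized visual-word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

# (a) Feature integration BEFORE clustering: concatenate the local and
# global descriptors (assumed here to be computed over the same regions)
# and build a single codebook over the fused feature space.
def feature_integration(local_desc, global_desc, k=200):
    fused = np.hstack([local_desc, global_desc])
    codebook = KMeans(n_clusters=k, n_init=10).fit(fused)
    return bovw_histogram(fused, codebook)

# (b) Visual word integration AFTER clustering: build one codebook per
# feature space, then concatenate the two visual-word histograms.
def visual_word_integration(local_desc, global_desc, k=100):
    cb_l = KMeans(n_clusters=k, n_init=10).fit(local_desc)
    cb_g = KMeans(n_clusters=k, n_init=10).fit(global_desc)
    return np.hstack([bovw_histogram(local_desc, cb_l),
                      bovw_histogram(global_desc, cb_g)])

# (c) Adaptive visual word integration: as in (b), but each histogram is
# scaled by a reliability weight; the paper learns these weights through
# self-paced learning, here they are passed in as placeholders.
def adaptive_integration(local_desc, global_desc, w_local, w_global, k=100):
    h = visual_word_integration(local_desc, global_desc, k)
    return h * np.hstack([np.full(k, w_local), np.full(k, w_global)])
```
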

Highlights

  • Due to a staggering increase in globalization, communication, and advancement in technology, the world has become a global village in its true sense.

  • Precision: the accuracy of a content-based image retrieval (CBIR) system in retrieving relevant images according to the visual contents of a query image is evaluated by precision (P), the ratio of relevant retrieved images to the total number of retrieved images (see the sketch after this list).

  • Conclusion and future work: the article explores the effect of adaptive feature weighting and adaptive fuzzy k-means clustering on the robust representation of the principal objects of images by integrating complementary visual words of the local and global features based on the BoVW methodology.
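
For concreteness, precision can be computed directly from the sets of retrieved and ground-truth relevant image identifiers. This is a minimal sketch; the function and the example IDs are illustrative, not taken from the article.

```python
def precision(retrieved_ids, relevant_ids):
    """Fraction of the retrieved images that are relevant:
    P = |retrieved AND relevant| / |retrieved|."""
    retrieved, relevant = set(retrieved_ids), set(relevant_ids)
    return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

# Example: 7 of the top-10 retrieved images are relevant -> P = 0.7
print(precision(range(10), [0, 1, 2, 4, 5, 8, 9, 17, 23]))
```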



Introduction

Due to a staggering increase in globalization, communication, and advancement in technology, the world has become a global village in its true sense. Traditional text-based approaches retrieve images based on manually annotated information, which has become impractical for such huge image repositories [1]. CBIR has been a rapidly progressing research area since 1990; it retrieves images with similar contents/features, i.e., colors, shapes, and textures. The retrieval process is categorized into two stages: (1) feature extraction and (2) feature matching. Visual similarity between semantically different objects is an intriguing issue that results in the misclassification of objects, which affects the overall performance of a CBIR system. Another barrier for a retrieval system is accurate feature matching. The research concern of today is to lessen the semantic gap between images' low-level visual features and users' high-level semantics in order to improve the accuracy of image retrieval systems.
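
The two stages can be made concrete with a deliberately simple sketch: a global color histogram stands in for the richer local and global features discussed in this article, and Euclidean distance in feature space stands in for the matching step. Both choices are illustrative assumptions, not the method proposed here.

```python
import numpy as np

def extract_features(image):
    """Stage 1 - feature extraction: an 8x8x8-bin RGB color histogram,
    L1-normalized (a simple global feature used here only for
    illustration)."""
    hist, _ = np.histogramdd(image.reshape(-1, 3),
                             bins=(8, 8, 8), range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / max(hist.sum(), 1.0)

def rank_images(query_feature, db_features):
    """Stage 2 - feature matching: rank database images by Euclidean
    distance to the query in feature space (most similar first)."""
    dists = np.linalg.norm(db_features - query_feature, axis=1)
    return np.argsort(dists)

# Usage with random stand-in images (H x W x 3, 8-bit RGB):
rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
db_features = np.stack([extract_features(img) for img in db])
query = rng.integers(0, 256, (64, 64, 3))
print(rank_images(extract_features(query), db_features))
```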


