Abstract

This paper introduces a reverse-search-engine integration into content-based image retrieval (CBIR) systems that employs convolutional neural networks (CNNs) for feature extraction. Global descriptors are generated with pre-trained CNN architectures such as ResNet50, InceptionV3, and InceptionResNetV2, allowing visually similar images to be retrieved without depending on linguistic annotations. Comparative analysis against existing methods, such as Gabor Wavelet, CNN-SVM, and metaheuristic approaches, demonstrates the superiority of the proposed Cartoon Texture Algorithm in CBIR. As the Internet sees exponential growth in diverse data types, the importance of CBIR continues to grow: efficient retrieval must rely on image features alone rather than on metadata. The results show that CBIR remains highly effective in the Internet age. The proposed model, which integrates ResNet-50-based feature extraction, a neural network trained on different image datasets, and clustering techniques to accelerate retrieval, delivers a significant improvement in accuracy and efficiency for content-dependent image retrieval. This methodology should prove useful as visual data on the Internet continues to grow, and it provides a solid basis for an effective image search and retrieval system.
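The retrieval step described above can be sketched with a simple nearest-neighbor search over global descriptors. This is a minimal illustration, not the paper's implementation: the `retrieve` function and the random stand-in descriptors are assumptions, with the vectors standing in for the pooled output of a pre-trained backbone such as ResNet50.

```python
import numpy as np

def retrieve(query_desc, gallery_descs, top_k=3):
    """Rank gallery images by cosine similarity to a query descriptor.

    In a full CBIR pipeline the descriptors would be global CNN features
    (e.g. average-pooled ResNet50 activations); here random vectors are
    used as stand-ins so the sketch stays self-contained.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_desc / np.linalg.norm(query_desc)
    g = gallery_descs / np.linalg.norm(gallery_descs, axis=1, keepdims=True)
    sims = g @ q
    # Indices of the top_k most similar gallery images, best first
    order = np.argsort(-sims)[:top_k]
    return order, sims[order]

# Stand-in gallery of 10 descriptors; the query is a noisy copy of item 4,
# so item 4 should be ranked first.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(10, 2048))
query = gallery[4] + 0.01 * rng.normal(size=2048)
indices, scores = retrieve(query, gallery)
```

Clustering (e.g. k-means over the gallery descriptors) can replace this exhaustive scan by restricting the search to the query's nearest cluster, which is the speed-up role clustering plays in the proposed model.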
