In digital image processing, content-based image retrieval (CBIR) has become essential for searching images by visual content characteristics such as color, shape, and texture, rather than relying on text-based annotations. To address the increasing demands for efficiency and precision in CBIR systems, we introduce the HybridEnsembleNet methodology. HybridEnsembleNet combines deep learning algorithms with an asymmetric retrieval framework to optimize feature extraction and comparison in extensive image databases. This novel approach, tailored specifically to CBIR, employs a lightweight query structure capable of handling large-scale data in resource-constrained environments. Experiments were performed on the ROxford and RParis datasets, where the deep learning component of HybridEnsembleNet significantly refines the accuracy of image matching and retrieval. On the ROxford dataset, the medium and hard difficulty benchmarks show improvements of 5.53% and 10.44%, respectively. Similarly, on the RParis dataset, the medium and hard benchmarks show improvements of 3.01% and 5.83%, demonstrating superior performance compared to existing models. By overcoming the traditional limitations of CBIR systems in mean average precision (mAP), HybridEnsembleNet provides a scalable, efficient, and more accurate solution for retrieving relevant images from vast digital libraries.
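The asymmetric retrieval idea summarized above can be illustrated conceptually: a heavier model embeds the gallery offline, a lightweight encoder embeds the query at search time in the same space, and ranking plus mAP-style evaluation proceed over that shared space. The sketch below is a minimal, generic illustration under those assumptions; the embeddings, function names, and the average-precision computation are placeholders, not the HybridEnsembleNet implementation described in the paper.

```python
import numpy as np

def cosine_rank(query_vec, gallery_mat):
    """Rank gallery items by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery_mat / np.linalg.norm(gallery_mat, axis=1, keepdims=True)
    scores = g @ q
    return np.argsort(-scores)  # gallery indices, best match first

def average_precision(ranked_ids, relevant_ids):
    """Average precision for one query: mean precision at each relevant hit."""
    relevant = set(relevant_ids)
    hits, precisions = 0, []
    for rank, idx in enumerate(ranked_ids, start=1):
        if idx in relevant:
            hits += 1
            precisions.append(hits / rank)
    return float(np.mean(precisions)) if precisions else 0.0

# Toy asymmetric setup: random 128-D vectors stand in for embeddings produced
# offline by a heavy gallery encoder, while the query side is assumed to come
# from a lightweight encoder mapping into the same space.
rng = np.random.default_rng(0)
gallery_embeddings = rng.normal(size=(1000, 128))  # precomputed offline
query_embedding = rng.normal(size=128)             # computed cheaply at query time

ranking = cosine_rank(query_embedding, gallery_embeddings)
print("AP for this query:", average_precision(ranking, relevant_ids=[3, 17, 250]))
```

Mean average precision (mAP), the metric cited in the reported results, is simply the mean of such per-query AP values over all evaluation queries.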