Abstract

Social media platforms such as Twitter, Facebook, and Flickr, together with the evolution of digital image capturing devices, have resulted in the generation of a massive number of images; consequently, digital image repositories have grown exponentially over the last decade. Content-based image retrieval (CBIR) has been extensively employed to reduce the dependency on textual annotations for image searching. An effective feature descriptor is essential for retrieving the most relevant images from a repository. Additionally, CBIR methods often suffer from the semantic gap problem, which must also be addressed. In this paper, we propose a novel texture descriptor, Directional Magnitude Local Hexadecimal Patterns (DMLHP), based on texture orientation and magnitude, to retrieve the most relevant images. The objective of the proposed feature descriptor is to examine the relationship between neighboring pixels and their adjacent neighbors based on texture orientation and magnitude. The DMLHP descriptor effectively captures the texture and semantic information of images that share the same visual content. Furthermore, the proposed method employs a learning-based approach to lessen the semantic gap problem and to improve the understanding of the contents of query images so that the most relevant images are retrieved. The presented descriptor provides remarkable results, achieving an average retrieval precision (ARP) of 66%, 92%, and 83%, an average retrieval recall (ARR) of 66%, 92%, and 83%, an average retrieval specificity (ARS) of 99%, 99%, and 76%, and an average retrieval accuracy (ARA) of 98%, 99%, and 85% on the AT&T, MIT Vistex, and Brodatz Texture image repositories, respectively. Our experiments reveal that the proposed DMLHP descriptor performs far better, i.e., 95% on AT&T, 92% on BT, and 99% on MIT Vistex, when used with a learning-based approach than with a non-learning-based approach (similarity measure). Experimental results show that the proposed texture descriptor outperforms state-of-the-art descriptors such as LNIP, LTriDP, LNDP, LDGP, LEPSEG, and CSLBP for CBIR.
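For reference, the ARP, ARR, ARS, and ARA figures quoted above correspond to per-query precision, recall, specificity, and accuracy averaged over all queries. The Python sketch below shows how such per-query values are commonly computed; the function name retrieval_metrics, the top-k retrieval setting, and the "same category as the query means relevant" convention are our assumptions for illustration, not the paper's exact evaluation protocol.

import numpy as np

def retrieval_metrics(retrieved_labels, query_label, db_labels, k):
    # Per-query metrics, assuming an image is "relevant" when it belongs
    # to the same category as the query (an illustrative convention here).
    retrieved = np.asarray(retrieved_labels[:k])
    db = np.asarray(db_labels)
    tp = np.sum(retrieved == query_label)        # relevant images retrieved
    fp = k - tp                                  # non-relevant images retrieved
    relevant_total = np.sum(db == query_label)   # relevant images in the repository
    tn = len(db) - relevant_total - fp           # non-relevant images correctly left out
    precision = tp / k
    recall = tp / relevant_total
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(db)
    return precision, recall, specificity, accuracy

# ARP, ARR, ARS, and ARA are then the means of these per-query values over all queries.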

Highlights

  • The tremendous evolution of digital cameras and the Internet has resulted in the generation of a massive amount of multimedia content over the last couple of decades

  • We evaluated our method on three standard image repositories, that is, AT&T [33], Brodatz Texture (BT) [34], and MIT Vistex [35]

  • The performance of the proposed descriptor is measured on three standard image repositories that are diverse in terms of pose variations, noise, occlusions, and a variety of natural and artificial regular textures



Introduction

The tremendous evolution of digital cameras and the Internet has resulted in the generation of a massive amount of multimedia content over the last couple of decades. Describing texture images with text is often difficult because different users employ distinct keywords for annotation. This reveals that text descriptors are subjective, which results in low retrieval accuracy. To overcome the limitations of text-based image retrieval (TBIR) systems, researchers introduced the concept of content-based image retrieval. CBIR addresses this limitation of TBIR because it does not require manual annotation to retrieve visually similar images [2]. A CBIR system relies on the visual contents of images, described by low-level features, that is, texture, shape, color, and spatial location, to build the feature repository. In a CBIR system, an image is provided as the query instead of a textual query.
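As a rough illustration of this query-by-example pipeline, the Python sketch below ranks repository images by a similarity measure over low-level feature vectors. The extract_features grey-level histogram and the chi-square-style distance are illustrative placeholders only; they stand in for a generic low-level descriptor and are not the DMLHP descriptor or the learning-based approach proposed in the paper.

import numpy as np

def extract_features(image):
    # Placeholder low-level descriptor: a normalized grey-level histogram
    # (stands in for texture/shape/color features; not the proposed DMLHP).
    hist, _ = np.histogram(image, bins=64, range=(0, 256))
    return hist / max(hist.sum(), 1)

def retrieve(query_image, repository_features, top_k=10):
    # Rank repository images by a chi-square-style histogram distance,
    # a common non-learning similarity measure for texture descriptors.
    q = extract_features(query_image)
    d = np.array([np.sum((q - f) ** 2 / (q + f + 1e-10)) for f in repository_features])
    return np.argsort(d)[:top_k]  # indices of the most visually similar images

The feature repository itself would be built offline by applying extract_features to every image in the collection, so that only the query image needs to be described at search time.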

