Abstract

In this paper, we propose the Full Direction Local Neighbor Pattern (FDLNP) algorithm, a novel method for Content-Based Image Retrieval. FDLNP consists of several steps: generating the Max and Min Quantizers; building two matrix types (the Eight Neighbors Euclidean Decimal Coding matrix and the Full Direction Matrices); extracting a Gray-Level Co-occurrence Matrix (GLCM) from those matrices and deriving the important features from each GLCM; and, finally, merging the output of the previous steps with the Local Neighbor Patterns (LNP) histogram. To reduce the feature-vector length, we propose five extensions of FDLNP that select specific direction matrices. Our results demonstrate the effectiveness of the proposed algorithm on color and texture databases compared with recent works, in terms of Precision, Recall, mean Average Precision (mAP), and Average Retrieval Rate (ARR). To further enhance retrieval accuracy, we propose a novel framework that combines the image retrieval system with clustering and classification algorithms. Moreover, we propose a distributed model that runs our FDLNP method on Hadoop, enabling a huge number of images to be processed in a reasonable time.
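The GLCM feature-extraction step of the pipeline above can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes a single horizontal pixel offset, a small number of gray levels, and a few classic GLCM statistics (contrast, energy, homogeneity, entropy); the function names are illustrative only.

```python
import math

def glcm(image, levels, dx=1, dy=0):
    """Normalized co-occurrence matrix P[i][j]: probability that a pixel
    with gray level i has a neighbor (at offset dx, dy) with level j."""
    h, w = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[image[y][x]][image[ny][nx]] += 1
    total = sum(map(sum, counts)) or 1
    return [[c / total for c in row] for row in counts]

def glcm_features(P):
    """Derive classic statistics from a normalized GLCM."""
    contrast = energy = homogeneity = entropy = 0.0
    for i, row in enumerate(P):
        for j, p in enumerate(row):
            contrast += (i - j) ** 2 * p       # local intensity variation
            energy += p * p                     # angular second moment
            homogeneity += p / (1 + abs(i - j)) # closeness to the diagonal
            if p > 0:
                entropy -= p * math.log2(p)     # randomness of the texture
    return {"contrast": contrast, "energy": energy,
            "homogeneity": homogeneity, "entropy": entropy}
```

In a full system these per-matrix features would be concatenated (here, with the LNP histogram) to form the final descriptor compared between the query and database images.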

Highlights

  • Over the last decade, the number of digital photos and videos accessible has increased dramatically

  • To enable processing of a huge number of images, we reduce the size of the Full Direction Local Neighbor Pattern (FDLNP) feature vector by proposing extended versions of the method

  • We propose a novel method for retrieving color and texture images, known as FDLNP, which starts by generating the Min and Max Quantizers, followed by the Eight Neighbors Euclidean Decimal Coding matrix and the Full Direction Matrices


Introduction

The number of digital photos and videos accessible has increased dramatically. Although software and hardware are available to digitize, archive, and compress multimedia data, there is no clear way to retrieve the stored information. In traditional Text-Based Image Retrieval (TBIR), metadata describing the image contents is manually added to image files, and this metadata is used to retrieve similar images by word matching. TBIR has two major difficulties: (a) labeling images manually requires a significant amount of time, and (b) human perception of images is neither precise nor unique. Content-Based Image Retrieval (CBIR) [1,2,3,4,5] is an alternative to traditional TBIR that overcomes these limitations because images are retrieved based on their content, such as color, texture, shape, and contour.

