Abstract

Two-dimensional fuzzy entropy, dispersion entropy, and their multiscale extensions (MFuzzyEn2D and MDispEn2D, respectively) have shown promising results for image classification. However, these results rely on the selection of key parameters that may strongly influence the entropy values obtained. Yet, the optimal choice for these parameters has not been studied thoroughly. We propose a study of the impact of these parameters on image classification. For this purpose, the entropy-based algorithms are applied to a variety of images from different datasets, each containing multiple image classes. Several parameter combinations are used to obtain the entropy values. These entropy values are then fed to a range of machine learning classifiers, and the algorithm parameters are analyzed based on the classification results. We show that, with appropriate parameter choices, both MFuzzyEn2D and MDispEn2D approach state-of-the-art image classification performance for multiple image types. They lead to an average maximum accuracy of more than 95% for all the datasets tested. Moreover, the features extracted by MFuzzyEn2D lead to better classification performance than those extracted by MDispEn2D in the majority of cases. Furthermore, the choice of classifier does not have a significant impact on the classification performance obtained with the features extracted by either entropy algorithm. The results open new perspectives for these entropy-based measures in textural analysis.
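To make the evaluation pipeline concrete, the minimal sketch below runs the same kind of parameter sweep on synthetic data: a texture descriptor is computed for each image under several parameter combinations, and the resulting feature values are passed to a few standard classifiers under cross-validation. The descriptor (shannon_entropy_2d), the parameter grid, and the classifier choices are illustrative assumptions, not the paper's MFuzzyEn2D/MDispEn2D implementations or its experimental setup.

```python
# Illustrative parameter-sweep pipeline: extract an entropy-like texture
# feature under several parameter combinations, then compare classifiers.
# The feature below is a simple stand-in, NOT the paper's MFuzzyEn2D/MDispEn2D.
import itertools
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def shannon_entropy_2d(img, n_bins, patch):
    """Stand-in texture descriptor: mean Shannon entropy of local patches."""
    h, w = img.shape
    vals = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = img[i:i + patch, j:j + patch]
            hist, _ = np.histogram(block, bins=n_bins, range=(0.0, 1.0))
            p = hist / hist.sum()
            p = p[p > 0]
            vals.append(-np.sum(p * np.log(p)))
    return np.mean(vals)

# Synthetic two-class "texture" dataset (smooth vs. noisy 50 x 50 images).
images, labels = [], []
for _ in range(60):
    smooth = rng.random((50, 50)) * 0.3 + 0.35
    noisy = rng.random((50, 50))
    images += [smooth, noisy]
    labels += [0, 1]
labels = np.array(labels)

param_grid = list(itertools.product([4, 8, 16], [5, 10]))   # (n_bins, patch)
classifiers = {
    "kNN": KNeighborsClassifier(n_neighbors=3),
    "SVM": SVC(kernel="rbf"),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
}

for n_bins, patch in param_grid:
    # One feature value per image for this parameter combination.
    X = np.array([[shannon_entropy_2d(im, n_bins, patch)] for im in images])
    for name, clf in classifiers.items():
        acc = cross_val_score(clf, X, labels, cv=5).mean()
        print(f"bins={n_bins:2d} patch={patch:2d} {name:>3s}: acc={acc:.2f}")
```

In the paper, this kind of sweep is what allows the parameter influence (and the relative insensitivity to the classifier choice) to be assessed from the classification accuracies.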

Highlights

  • The results show that MFuzzyEn2D was computationally faster than MDispEn2D, achieving average computation times of 0.29 and 4.12 seconds per image for image sizes of 50 × 50 px and 100 × 100 px, respectively

  • Textural features extracted by MFuzzyEn2D resulted in better classification performance than those extracted by MDispEn2D in the majority of cases

Summary

Introduction

Information theory, relative entropy, and the Kullback–Leibler divergence are widely used concepts (see, e.g., References [1,2,3]). One-dimensional entropy measures, e.g., sample entropy (SampEn1D) [7], permutation entropy (PerEn1D) [8], fuzzy entropy (FuzzyEn1D) [9], and dispersion entropy (DispEn1D) [10], have proven effective at quantifying the irregularity of time series data. This success has led to the development of bidimensional (2D) entropy measures for images (2D data): SampEn2D [11], PermEn2D [12,13], FuzzyEn2D [14], and DispEn2D [15]. Classification from texture analysis has important applications in a large variety of fields such as medical image analysis, remote sensing, content-based image retrieval, object recognition, and many others (see, e.g., [16,17,18,19,20]).
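To give a concrete flavour of how such a bidimensional measure quantifies irregularity, the sketch below follows the general structure of fuzzy entropy in 2D: mean-removed m × m template matrices, Chebyshev distances between templates, and an exponential fuzzy membership function governed by a tolerance r and a fuzzy power n (exactly the kind of parameters whose influence is studied here). It is a simplified illustration under these assumptions, not the reference FuzzyEn2D implementation, which may differ in details such as border handling and normalization.

```python
# Simplified sketch of a bidimensional fuzzy entropy (FuzzyEn2D-like):
# mean-removed m x m templates, pairwise Chebyshev distances, and an
# exponential fuzzy membership. Illustrative only.
import numpy as np

def fuzzy_entropy_2d(img, m=2, r_factor=0.2, n=2):
    img = np.asarray(img, dtype=float)
    r = r_factor * img.std()          # tolerance: a key parameter of the method

    def phi(size):
        h, w = img.shape
        # Collect mean-removed templates of shape (size, size).
        templates = np.array([
            (img[i:i + size, j:j + size] - img[i:i + size, j:j + size].mean()).ravel()
            for i in range(h - size)
            for j in range(w - size)
        ])
        # Pairwise Chebyshev distances between templates.
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=-1)
        # Fuzzy similarity degree for each pair (self-matches excluded).
        sim = np.exp(-(d ** n) / r)
        np.fill_diagonal(sim, 0.0)
        k = len(templates)
        return sim.sum() / (k * (k - 1))

    # Entropy as the log-ratio of similarity degrees at sizes m and m + 1.
    return np.log(phi(m)) - np.log(phi(m + 1))

# Example: an irregular texture yields a higher value than a regular one.
rng = np.random.default_rng(0)
regular = np.tile([[0.0, 1.0], [1.0, 0.0]], (10, 10))   # 20 x 20 checkerboard
irregular = rng.random((20, 20))
print(fuzzy_entropy_2d(regular), fuzzy_entropy_2d(irregular))
```

This higher-entropy-for-irregular-texture behaviour is the property that texture classification builds on, and the values obtained depend directly on the choices of m, r, and n, which motivates the parameter study.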
