Abstract

Hyperspectral image classification (HSIC) has generated considerable interest over the past years. However, one of the challenging issues arising in HSIC is inconsistent classification, which is mainly caused by random training sampling (RTS), i.e., the random selection of training data: a different set of training samples may produce a different classification result. A common approach to this problem is the so-called K-fold method, which implements RTS K times and reports the mean overall accuracy along with its standard deviation to describe the confidence level of classification performance. As an alternative, this article develops an iterative RTS (IRTS) method to reduce the uncertainty caused by RTS. Its idea is to add spatially filtered classification maps to the image cube currently being processed via feedback loops, thereby augmenting the image cube iteratively. The training samples are then reselected randomly from the newly augmented image cube iteration-by-iteration. As a result, the training samples selected at each iteration are updated with the new spatial information captured by the spatial filters implemented at that iteration. The experimental results clearly demonstrate that IRTS improves classification accuracy and reduces inconsistency in classification results.
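The K-fold baseline described above can be sketched generically: draw K independent random training sets from the same data, train, and summarize the spread of overall accuracy. This is a minimal illustration assuming a generic `classify(train_x, train_y) -> model` interface; it is not the paper's implementation.

```python
import random
import statistics

def k_fold_rts_accuracy(samples, labels, classify, k=10,
                        train_fraction=0.1, seed=0):
    """Run random training sampling (RTS) K times independently and
    report mean and standard deviation of overall accuracy.
    A generic sketch of the K-fold method, not the paper's code."""
    rng = random.Random(seed)
    n = len(samples)
    n_train = max(1, int(train_fraction * n))
    accuracies = []
    for _ in range(k):
        # each fold draws a fresh random training set from the SAME data
        train_idx = set(rng.sample(range(n), n_train))
        test_idx = [i for i in range(n) if i not in train_idx]
        model = classify([samples[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        correct = sum(model(samples[i]) == labels[i] for i in test_idx)
        accuracies.append(correct / len(test_idx))
    return statistics.mean(accuracies), statistics.pstdev(accuracies)
```

IRTS differs in that, instead of resampling from the same original cube, each iteration would resample from a cube augmented with the spatially filtered classification map of the previous iteration.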

Highlights

  • Hyperspectral image classification (HSIC) has received considerable interest; see, for instance, [1]–[7], to name just a few

  • 2) To measure the effectiveness of iterative RTS (IRTS)-SS, several new information measures are derived from information theory, such as class self-information (CSI), class entropy (CE), overall CE (OCE)/average CE (ACE), and sample entropy (SE). 3) By virtue of IRTS, many SS classifiers, such as ISVM [18], edge-preserving filter (EPF) [2], and iterative EPF (IEPF) [17], can be readily extended to their corresponding IRTS versions: IRTS-support vector machine (SVM), IRTS-Gaussian, IRTS-Gabor, IRTS-EPF, and IRTS-IEPF. 4) Since EPF is strongly affected by its choice of spatial filters (SFs), two new fusion methods, referred to as IRTS-GEPF, which fuses Gaussian filters with EPF, and IRTS-Gabor-EPF, which fuses Gabor filters with EPF, are further proposed to improve classification performance

  • This article develops a new approach to HSIC, called IRTS-SS, which reduces the classification inconsistency and uncertainty caused by random training sampling (RTS) so as to improve classification accuracy


Summary

INTRODUCTION

Hyperspectral image classification (HSIC) has received considerable interest; see, for instance, [1]–[7], to name just a few. IRTS randomly selects a new set of training samples from an augmented data cube expanded by including additional spatial classification information obtained from the preceding iteration, whereas the K-fold method randomly selects training samples from the same original dataset K times independently. One-fold nI-iteration RTS with K = nI randomly selects new sets of training samples iteratively from nI augmented data cubes, which are generated iteration-by-iteration via feedback loops. Another issue is how to determine the parameter K in the K-fold method and nI in IRTS. On the other hand, when K-fold IRTS is referred to, it implements IRTS K times independently using K different sets of training samples randomly selected from the same original data cube; each fold implemented by IRTS is a single-fold IRTS, i.e., one-fold nI-iteration RTS. In such a K-fold IRTS, there are K different values of nI, each of which is determined by the stopping rule of a single one-fold IRTS, i.e., one-fold nI-iteration RTS. 1) Conventional SS classification can be extended to IRTS-SS classification. 2) To measure the effectiveness of IRTS-SS, several new information measures are derived from information theory, such as CSI, CE, overall CE (OCE)/average CE (ACE), and sample entropy (SE). 3) By virtue of IRTS, many SS classifiers, such as ISVM [18], EPF [2], and IEPF [17], can be readily extended to their corresponding IRTS versions: IRTS-SVM, IRTS-Gaussian, IRTS-Gabor, IRTS-EPF (including bilateral and guided filters), and IRTS-IEPF. 4) Since EPF is strongly affected by its choice of SFs, two new fusion methods, referred to as IRTS-GEPF, which fuses Gaussian filters with EPF, and IRTS-Gabor-EPF, which fuses Gabor filters with EPF, are further proposed to improve classification performance.
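The information measures named above (CSI, CE, OCE/ACE, SE) are not defined in this summary. A minimal information-theoretic reading of the first two, following standard Shannon definitions (an assumption, since the paper's exact formulas may differ), would be:

```python
import math

def class_self_information(p_c):
    """Self-information of a class occurring with probability p_c, in bits.
    Rare classes carry more information (larger -log2 p)."""
    return -math.log2(p_c)

def class_entropy(probs):
    """Shannon entropy of a class-probability distribution, in bits.
    High entropy means an uncertain (inconsistent) classification."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def average_class_entropy(prob_maps):
    """Average per-pixel classification entropy over a map; one plausible
    reading of 'average CE (ACE)', not necessarily the paper's definition."""
    return sum(class_entropy(p) for p in prob_maps) / len(prob_maps)
```

Under this reading, a pixel classified with probability vector [0.5, 0.5] has entropy 1 bit (maximal uncertainty for two classes), while [1.0, 0.0] has entropy 0, so a drop in average entropy across iterations would indicate reduced classification uncertainty.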

RANDOM CLASSIFICATION
RTS THEORY
Probability of Classification
Uncertainty Reduction
SS CLASSIFICATION USING IRTS
INFORMATION MEASURES DERIVED FOR RTS
Classification Entropy
PERFORMANCE EVALUATION
EXPERIMENTS AND DISCUSSION OF RESULTS
Purdue Indiana Indian Pines
Salinas
University of Pavia
Novelties of IRTS Theory
Discussions
Findings
CONCLUSION
