Abstract

Hyperspectral image classification (HSIC) methods usually require many training samples to achieve good classification performance. However, large numbers of labeled samples are difficult to obtain because labeling an HSI pixel by pixel is costly and time-consuming. Overcoming the insufficient accuracy and instability that arise under a small labeled training sample size (SLTSS) therefore remains a challenge for HSIC. In this paper, we propose a novel multiple superpixel graphs learning method based on adaptive multiscale segmentation (MSGLAMS) for HSI classification to address this problem. First, the multiscale-superpixel-based framework reduces the adverse effect of an improperly chosen superpixel segmentation scale on classification accuracy while saving the cost of manually seeking a suitable scale. To make full use of the superpixel-level spatial information at different segmentation scales, a novel two-step multiscale selection strategy is designed to adaptively select a group of complementary scales (multiscale). To reduce the bias and instability of a single model, multiple superpixel-based graphical models, obtained by constructing superpixel contracted graphs at the fusion scales, jointly predict the final results via a pixel-level fusion strategy. Experimental results show that the proposed MSGLAMS outperforms other state-of-the-art algorithms. Specifically, its overall accuracy reaches 94.312%, 99.217%, 98.373%, and 92.693% on the Indian Pines, Salinas, and University of Pavia datasets and the more challenging Houston2013 dataset, respectively.
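As a rough illustration of the multiscale-superpixel-plus-voting idea summarized above, the sketch below segments an HSI cube at several superpixel scales and fuses per-scale label maps by pixel-level majority voting. It is a minimal sketch only: SLIC as the segmentation backend, the scale list, and the compactness value are assumptions, and the adaptive scale selection and graph-based classification of MSGLAMS are not shown.

```python
# Minimal sketch of the multiscale-superpixel framework with pixel-level voting.
# Assumptions (not from the paper): SLIC as the segmentation backend, the scale
# list, and the compactness value; per-scale label maps are produced elsewhere.
import numpy as np
from skimage.segmentation import slic

def multiscale_superpixel_maps(hsi_cube, scales=(100, 200, 400, 800)):
    """Segment an (H, W, B) hyperspectral cube at several superpixel scales."""
    return {n: slic(hsi_cube, n_segments=n, compactness=0.1, channel_axis=-1)
            for n in scales}

def fuse_by_majority_vote(per_scale_label_maps):
    """Pixel-level fusion: each scale's (H, W) label map casts one vote per pixel."""
    stacked = np.stack(per_scale_label_maps, axis=0)          # (S, H, W)

    def vote(column):                                          # column: (S,)
        values, counts = np.unique(column, return_counts=True)
        return values[np.argmax(counts)]

    return np.apply_along_axis(vote, 0, stacked)               # (H, W)
```

Here `per_scale_label_maps` would hold the classification map produced by one superpixel graph model per selected scale; ties in the vote fall to the smallest class index, which is a simplification of whatever tie-breaking the paper uses.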

Highlights

  • Hyperspectral images (HSIs) contain hundreds of bands [1,2,3], which provide rich spectral information and spatial information [4,5]

  • To reduce the bias and variance of classification results when the number of training samples is very small, and to select a group of multiscale superpixel maps with complementary information, a multiple superpixel graphs learning method based on adaptive multiscale segmentation (MSGLAMS) is proposed for HSI classification (HSIC) in this paper

  • The proposed MSGLAMS adopts a multiscale-superpixel-based framework, which reduces the adverse effect of an improperly selected superpixel segmentation scale on classification accuracy while saving the cost of manually seeking a suitable segmentation scale


Summary

Introduction

Hyperspectral images (HSIs) contain hundreds of bands [1,2,3], which provide rich spectral information and spatial information [4,5]. To reduce the bias and variance of classification results when the number of training samples is very small, and to select a group of multiscale superpixel maps with complementary information, a multiple superpixel graphs learning method based on adaptive multiscale segmentation (MSGLAMS) is proposed for HSIC in this paper. A multiscale selection method based on sparse representation (MSSR) is proposed to select fusion scales that make a positive contribution to supplementing the spatial information of the optimal reference scale (ORS). Multiple superpixel-based graphical models (SGL-based models), created by constructing superpixel contracted graphs at the determined scales (the fusion scale pool), are adopted to jointly predict the final classification results; that is, pixel-level labels are determined by the voting results of these different models. This Boosting-like fusion strategy can significantly reduce the bias and instability of the final results while keeping the models' inductive biases similar.
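The sketch below illustrates, under stated assumptions, what one SGL-style model could look like: superpixels become graph nodes, edges connect spatially adjacent superpixels weighted by an RBF kernel on their mean spectra, and sparse labels spread over the graph with the classic label-propagation update F ← αSF + (1 − α)Y. The adjacency rule, the RBF weighting, sigma, and alpha are assumptions for illustration, not the paper's exact construction.

```python
# Hedged sketch of one superpixel graph model: build a superpixel contracted
# graph, then spread sparse labels over it with standard label propagation.
# The adjacency rule, RBF edge weighting, sigma, and alpha are assumptions.
import numpy as np

def superpixel_contracted_graph(segments, hsi_cube, sigma=1.0):
    """segments: (H, W) superpixel ids; hsi_cube: (H, W, B) spectra."""
    segments = segments - segments.min()                 # force 0-based ids
    n_sp = segments.max() + 1
    means = np.stack([hsi_cube[segments == i].mean(axis=0) for i in range(n_sp)])

    # Edges between 4-connected pixel pairs that straddle a superpixel border.
    W = np.zeros((n_sp, n_sp))
    for a, b in zip(segments[:, :-1].ravel(), segments[:, 1:].ravel()):
        if a != b:
            W[a, b] = W[b, a] = 1.0
    for a, b in zip(segments[:-1, :].ravel(), segments[1:, :].ravel()):
        if a != b:
            W[a, b] = W[b, a] = 1.0

    # Re-weight existing edges by spectral similarity of superpixel means.
    d2 = ((means[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    return W * np.exp(-d2 / (2.0 * sigma ** 2))

def propagate_labels(W, Y, alpha=0.9, n_iter=50):
    """Label propagation F <- alpha * S @ F + (1 - alpha) * Y on the graph.
    Y: (n_sp, n_classes) one-hot rows for labeled superpixels, zeros otherwise."""
    d = W.sum(axis=1)
    d[d == 0] = 1.0
    S = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]    # D^-1/2 W D^-1/2
    F = Y.astype(float)
    for _ in range(n_iter):
        F = alpha * (S @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)                              # one label per superpixel
```

Running such a model for each scale in the fusion pool, mapping each superpixel's label back to its pixels, and then applying the pixel-level majority vote from the earlier sketch gives the overall flow the paragraph describes.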

Proposed Method
Construction of the Candidate Scale Pool
Optimal Reference Scale Selection
Graph Construction
Label Propagation
Multiscale Selection Based on Sparse Representation
A Pixel-Level Fusion Strategy for Multiple Graphical Models
Datasets
Experimental Settings
Experimental Settings for Comparison with Other State-of-the-Art Methods
Experimental Settings of ORSSA
Experimental Settings of MSSR
Comparison of Results of Different Methods
Analysis of Experimental Results of the ORSSA and MSSR
Parameter Analysis
Parameter Analysis of λ and κ
Effect of Different Number of Training Samples
Running Time Comparison
Findings
Conclusions