Abstract

Hyperspectral dimensionality reduction (HDR), an important preprocessing step prior to high-level data analysis, has been garnering growing attention in the remote sensing community. Although a variety of methods, both unsupervised and supervised, have been proposed for this task, the discriminative ability of the resulting feature representations remains limited, owing to the lack of a powerful tool that effectively exploits labeled and unlabeled data together in the HDR process. To address this need, this paper proposes a semi-supervised HDR approach called iterative multitask regression (IMR). IMR learns a low-dimensional subspace by jointly considering the labeled and unlabeled data, and bridges the learned subspace with two regression tasks: one on the labels and one on pseudo-labels initialized by a given classifier. More significantly, IMR dynamically propagates the labels on a learnable graph and progressively refines the pseudo-labels, yielding a well-conditioned feedback system. Experiments on three widely used hyperspectral image datasets demonstrate that the dimension-reduced features learned by the proposed IMR framework are superior, in terms of classification and recognition accuracy, to those of related state-of-the-art HDR approaches.
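
To make the loop described above concrete, the following is a minimal, illustrative sketch of an IMR-style iteration in Python (NumPy). All names, the plain least-squares regression, and the RBF-graph choice are assumptions made for illustration; this is not the authors' implementation or exact objective.

```python
import numpy as np

def rbf_graph(Z, sigma=1.0):
    """Row-normalized affinity graph built from the current features Z;
    stands in here for the learnable graph used by IMR (assumed form)."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W / W.sum(axis=1, keepdims=True)

def imr_sketch(X_l, Y_l, X_u, Y_u0, n_iter=5, lam=1.0):
    """X_l/Y_l: labeled spectra and one-hot labels; X_u: unlabeled spectra;
    Y_u0: pseudo-labels (one-hot) from a given classifier. Returns the shared
    projection P and the refined pseudo-labels."""
    X = np.vstack([X_l, X_u])
    Y_u = Y_u0.copy()
    n_l = X_l.shape[0]
    for _ in range(n_iter):
        # Two regression tasks share one projection P: labeled spectra -> labels,
        # unlabeled spectra -> current pseudo-labels (weighted by lam).
        w = np.sqrt(lam)
        A = np.vstack([X_l, w * X_u])
        B = np.vstack([Y_l, w * Y_u])
        P, *_ = np.linalg.lstsq(A, B, rcond=None)
        Z = X @ P                                    # low-dimensional features
        # Propagate labels on a graph rebuilt from the current features,
        # then refine the pseudo-labels of the unlabeled samples.
        F = rbf_graph(Z) @ np.vstack([Y_l, Y_u])
        Y_u = np.eye(Y_l.shape[1])[F[n_l:].argmax(axis=1)]
    return P, Y_u
```

In this toy form, the shared regression coefficients play the role of the low-dimensional projection, and each pass rebuilds the graph from the current features before refining the pseudo-labels, mirroring the feedback loop described in the abstract.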

Highlights

  • Multitask regression with graph learning: in the multitask framework, we propose a learning-based graph regularization, instead of a fixed graph artificially constructed with known kernels, in order to depict the connectivity between samples (a sketch contrasting the two graph constructions follows this list)

  • Inspired by latent subspace learning, the joint learning (JL) model dramatically outperforms feature space discriminant analysis (FSDA), improving the overall accuracy (OA) by around 4%, 6%, 2%, and 1%, respectively

  • Building on the JL-like model, the proposed iterative multitask regression (IMR) framework achieves the best performance, owing to the multitask learning formulation, in which labeled and unlabeled samples are jointly regressed, and to the iterative updating strategy for pseudo-labels
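
As a complement to the first highlight, the sketch below contrasts a fixed kernel graph with a simple data-driven graph obtained by ridge self-representation. This is just one common way to realize a learning-based graph and is not claimed to be the paper's exact formulation; all function names are illustrative.

```python
import numpy as np

def fixed_rbf_graph(X, sigma=1.0):
    """Fixed graph: affinities from a hand-picked RBF kernel on the raw spectra."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    return W

def learned_graph(X, alpha=0.1):
    """Learned graph: reconstruction coefficients from a ridge self-representation
    of the samples define the connectivity, instead of a preset kernel."""
    n = X.shape[0]
    G = X @ X.T
    S = np.linalg.solve(G + alpha * np.eye(n), G)   # argmin_S ||X - S X||^2 + alpha ||S||^2
    np.fill_diagonal(S, 0.0)                        # drop trivial self-connections
    return 0.5 * (np.abs(S) + np.abs(S.T))          # symmetric affinity matrix
```

The learned graph adapts its connectivity to the data at hand, whereas the fixed kernel graph depends entirely on a preset bandwidth; this difference is what the first highlight refers to.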


Summary

Introduction

Hyperspectral imaging has garnered growing attention for many remote sensing tasks (Plaza et al., 2009), such as land-use and land-cover classification (Yu et al., 2017; Gan et al., 2018; Hang et al., 2019), large-scale urban or agricultural mapping (Dell’Acqua et al., 2004; Yang et al., 2013; Fan et al., 2015; Xie and Weng, 2017), spectral unmixing (Henrot et al., 2016; Hong et al., 2017; Zhong et al., 2016; Hong et al., 2019a), object detection (McCann et al., 2017; Wu et al., 2018; Li et al., 2018; Wu et al., 2019), and multimodal scene interpretation (Tuia et al., 2016; Yokoya et al., 2018; Zhu et al., 2019; Liu et al., 2019), as forthcoming spaceborne imaging spectroscopy satellites (e.g., EnMAP (Guanter et al., 2015)) make hyperspectral imagery (HSI) available on a larger scale. With significant support in both theory and practice, and given that learning-based strategies generally outperform manually designed feature extraction (Hong et al., 2016a), a considerable number of subspace learning approaches have been designed over the past decades and applied to hyperspectral data processing and analysis (Licciardi et al., 2009; Huang and Yang, 2015; Hong et al., 2016b; Luo et al., 2016; Liu et al., 2017; Xu et al., 2018a; Xu et al., 2019), hyperspectral dimensionality reduction (HDR) (Gao et al., 2017a; Hong et al., 2017; Gao et al., 2017b), and spectral band selection (Sun et al., 2015; Sun et al., 2017a). A general yet effective method integrating LDA with LPP, called semi-supervised local discriminant analysis (SELD), was proposed in Liao et al. (2013) for semi-supervised hyperspectral feature extraction. Inspired by GLP, Zhao et al. (2014) enhanced the performance of LDA by jointly utilizing the labels and the “soft labels” predicted by GLP for semi-supervised subspace dimensionality reduction. Wu and Prasad (2018) proposed a similar approach to semi-supervised discriminative dimensionality reduction of HSI by embedding pseudo-labels (instead of the similarity measurement in LPP (Liao et al., 2013)) into LFDA rather than the LDA used in Zhao et al. (2014).
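
For context, the snippet below sketches a generic graph-based label propagation step (one common reading of GLP here) that turns a few labeled pixels into soft labels for the remaining ones. It is a textbook formulation given only for orientation, not the exact algorithm of the cited works; all names are illustrative.

```python
import numpy as np

def label_propagation(X, Y0, sigma=1.0, alpha=0.9):
    """X: (n, bands) spectra; Y0: (n, classes) with one-hot rows for labeled
    pixels and all-zero rows for unlabeled ones. Returns soft labels (n, classes)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))           # symmetrically normalized affinities
    n = X.shape[0]
    # Closed-form solution of the propagation iteration F <- alpha*S*F + (1-alpha)*Y0
    F = np.linalg.solve(np.eye(n) - alpha * S, (1 - alpha) * Y0)
    return F / F.sum(axis=1, keepdims=True)   # normalize rows to soft labels
```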

Motivation and objectives
Method overview and contributions
The proposed methodology
Review of the JL model
Modal learning
Convergence analysis and computational complexity
Data description
Experimental configuration
Results and analysis
Parameter sensitivity analysis
Computational cost in different methods
Conclusions
