Abstract

Deep spectral–spatial feature fusion has become a research focus in hyperspectral image (HSI) classification. However, how to extract more robust spectral–spatial features remains a challenging problem. In this article, a novel deep multilayer fusion dense network (MFDN) is proposed to improve the performance of HSI classification. The proposed MFDN simultaneously extracts spatial and spectral features based on different sample input sizes, which allows it to capture abundant spectral and spatial correlation information. First, the principal component analysis (PCA) algorithm is applied to the hyperspectral data to obtain low-dimensional HSI data, and the spatial features are then extracted from the low-dimensional 3-D HSI data through 2-D convolutional, 2-D dense block, and average-pooling layers. Second, the spectral features are extracted directly from the raw 3-D HSI data by means of 3-D convolutional, 3-D dense block, and average-pooling layers. Third, the spatial and spectral features are fused through 3-D convolutional, 3-D dense block, and average-pooling layers. Finally, the fused spectral–spatial features are fed into two fully connected layers to extract high-level abstract features. Furthermore, the densely connected structure helps alleviate the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and improves the HSI classification accuracy. Experimental results demonstrate that the proposed fusion network achieves outstanding classification performance and outperforms state-of-the-art methods, especially with a small number of labeled samples.
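
To make the first step concrete, here is a minimal preprocessing sketch in Python (NumPy and scikit-learn), not the authors' released code: PCA reduces the spectral dimension, and two neighborhood patches of different sizes are cut for the two branches. The number of components, the patch sizes (21 and 7), and the example pixel location are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_reduce(cube, n_components=30):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube with PCA."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b)                                 # one spectrum per pixel
    reduced = PCA(n_components=n_components).fit_transform(flat)
    return reduced.reshape(h, w, n_components)

def extract_patch(cube, row, col, size):
    """Cut a size x size neighborhood centred on (row, col); reflect-pad at borders."""
    pad = size // 2
    padded = np.pad(cube, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    return padded[row:row + size, col:col + size, :]

# A larger patch of PCA-reduced data feeds the spatial branch, while a smaller
# patch of the raw cube keeps the full spectrum for the spectral branch.
cube = np.random.rand(145, 145, 200).astype(np.float32)       # Indian Pines-like shape
low_dim = pca_reduce(cube, n_components=30)
spatial_patch = extract_patch(low_dim, 72, 72, size=21)       # 21 x 21 x 30 (assumed size)
spectral_patch = extract_patch(cube, 72, 72, size=7)          # 7 x 7 x 200 (assumed size)
```

Cutting patches of different sizes in this way lets the spatial branch see a wide neighborhood at low spectral dimension while the spectral branch keeps every band.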

Highlights

  • Hyperspectral sensors can capture hundreds of narrow spectral channels with very high spectral resolution.

  • The effectiveness of our method is demonstrated on three real-world hyperspectral remote sensing datasets: the Indian Pines (IN), University of Pavia (UP), and Kennedy Space Center (KSC) datasets.

  • Compared with other deep learning methods, the multilayer fusion dense network (MFDN) extracts spatial and spectral features based on different sample input sizes.


Summary

INTRODUCTION

Hyperspectral sensors can capture hundreds of narrow spectral channels with very high spectral resolution. Yang et al. [17] designed a Two-CNN model to learn the spectral and spatial features jointly. In that framework, the spectral input is one-dimensional (1-D), which leads to a lack of neighborhood information in the spatial dimension. In the SSRN and FDSSC, the input of the spatial block is based on the output of the spectral block, so the spatial learning loses spatial information. To solve these problems and extract more discriminative fusion features, we propose a novel deep multilayer fusion dense network (MFDN) for HSI classification. The MFDN simultaneously extracts the spatial and spectral features based on different sample input sizes, and the spatial and spectral features are fused through a multilayer fusion strategy with a densely connected structure. The spatial features are extracted from the low-dimensional 3-D HSI data through 2-D convolutional, 2-D dense block, and average-pooling layers.
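
As a rough illustration of this two-branch design, the following PyTorch sketch (not the authors' code) builds a 2-D densely connected spatial branch for the PCA-reduced patches and a 3-D convolutional spectral branch for the raw spectral patches, then classifies the concatenated features with two fully connected layers. It simplifies the fusion stage (the paper fuses the branches with further 3-D convolution and a 3-D dense block rather than plain concatenation), and all layer widths, growth rates, and input shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock2D(nn.Module):
    """Each layer receives the concatenation of all earlier feature maps (feature reuse)."""
    def __init__(self, in_ch, growth, layers):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch + i * growth, growth, kernel_size=3, padding=1)
            for i in range(layers)
        )

    def forward(self, x):
        for conv in self.convs:
            x = torch.cat([x, torch.relu(conv(x))], dim=1)  # dense connection
        return x

class MFDNSketch(nn.Module):
    """Simplified two-branch network: spatial (2-D) and spectral (3-D) feature extraction."""
    def __init__(self, n_pca=30, n_classes=16):
        super().__init__()
        # Spatial branch: 2-D conv -> 2-D dense block -> global average pooling.
        self.spa_stem = nn.Conv2d(n_pca, 24, kernel_size=3, padding=1)
        self.spa_dense = DenseBlock2D(24, growth=12, layers=2)
        self.spa_pool = nn.AdaptiveAvgPool2d(1)
        # Spectral branch: 3-D conv along the band axis -> global average pooling.
        self.spe_stem = nn.Conv3d(1, 24, kernel_size=(7, 1, 1), padding=(3, 0, 0))
        self.spe_pool = nn.AdaptiveAvgPool3d(1)
        # Two fully connected layers on the concatenated spectral-spatial features.
        self.fc = nn.Sequential(
            nn.Linear(24 + 2 * 12 + 24, 128), nn.ReLU(), nn.Linear(128, n_classes)
        )

    def forward(self, spatial_patch, spectral_patch):
        # spatial_patch: (N, n_pca, 21, 21); spectral_patch: (N, 1, bands, 7, 7)
        s = self.spa_pool(self.spa_dense(torch.relu(self.spa_stem(spatial_patch)))).flatten(1)
        p = self.spe_pool(torch.relu(self.spe_stem(spectral_patch))).flatten(1)
        return self.fc(torch.cat([s, p], dim=1))
```

A forward pass such as MFDNSketch()(torch.randn(4, 30, 21, 21), torch.randn(4, 1, 200, 7, 7)) checks that the two input sizes and the concatenation line up; the dense block shows how concatenating earlier feature maps gives each layer access to all previously computed features.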

PROPOSED FRAMEWORK
Spatial Feature Extraction
Spectral Feature Extraction
Spectral–Spatial Feature Extraction
Experimental Datasets
Experimental Settings
Analysis of Parameters
Experiment Results and Analysis
Discussions
CONCLUSION