Abstract

Recently, convolutional neural networks (CNNs) have been successfully applied to extract abstract features from hyperspectral images (HSIs), and they have achieved competitive performance in HSI classification. However, HSI data contain inhomogeneous pixels and inherent spectral correlation, and the classification performance of a CNN on HSI data is degraded when all information is modeled with equal importance. To address these issues, we propose an attention mechanism-based method termed the multi-level feature network with spectral–spatial attention model (MFNSAM), which consists of a multi-level feature CNN (MFCNN) and a spectral–spatial attention module (SSAM). Owing to the rich spectral information and spatial distribution in HSI data, the MFCNN is employed as a multi-scale fusion architecture to bridge the gaps between multi-level features. Specifically, the MFCNN extracts diverse information by compounding the representations generated by each tunnel of a multi-scale filter group. To improve the representational capacity in the spatial and spectral domains, a channel-wise attention branch is exploited to suppress redundant spectral information, and a spatial-wise attention branch is designed to explore contextual information for better refinement. The SSAM is then formed by merging the two branches to adaptively recalibrate the nonlinear interdependence of deep spectral–spatial features. Experiments on the University of Pavia, Heihe, and Kennedy Space Center hyperspectral data sets demonstrate that the proposed model provides competitive results compared with state-of-the-art methods.
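The SSAM described above merges a channel-wise (spectral) branch with a spatial-wise branch to recalibrate deep features. As a rough illustration only, the sketch below shows one common way to build such a spectral–spatial attention block in PyTorch; the class name SpectralSpatialAttention, the reduction ratio, the 7x7 spatial kernel, and the multiplicative merging are assumptions made for this sketch and do not reproduce the authors' exact MFNSAM architecture.

```python
import torch
import torch.nn as nn


class SpectralSpatialAttention(nn.Module):
    """Illustrative sketch (not the authors' implementation): a channel-wise
    branch re-weights spectral feature maps and a spatial-wise branch
    re-weights pixel locations; the two refinements are merged."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        # Channel-wise (spectral) attention: squeeze the spatial dimensions,
        # then learn per-channel weights to suppress redundant spectral bands.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial-wise attention: pool across channels, then a convolution
        # produces a per-pixel weight map capturing contextual information.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, height, width) deep spectral-spatial features
        channel_weights = self.channel_fc(x)             # (B, C, 1, 1)
        avg_map = torch.mean(x, dim=1, keepdim=True)     # (B, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # (B, 1, H, W)
        spatial_weights = self.spatial_conv(
            torch.cat([avg_map, max_map], dim=1)         # (B, 2, H, W) -> (B, 1, H, W)
        )
        # Merge the two branches to recalibrate the input features.
        return x * channel_weights * spatial_weights


if __name__ == "__main__":
    feats = torch.randn(2, 64, 11, 11)   # e.g. patch features from a CNN backbone
    refined = SpectralSpatialAttention(64)(feats)
    print(refined.shape)                 # torch.Size([2, 64, 11, 11])
```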
