Abstract

Convolutional neural networks (CNNs) have recently been widely applied to hyperspectral image (HSI) classification with appreciable performance. However, current CNN-based HSI classification methods have limitations in exploiting multiscale features and extracting sufficiently discriminative features, and the commonly adopted dimensionality-reduction methods, such as PCA, may discard some or all of the physical information carried by the original bands. To address these problems, in this letter we propose an adaptive multiscale feature attention network (AMFAN) for HSI classification. First, we use a band selection algorithm for dimensionality reduction, which preserves the original characteristics of the image. Second, unlike existing multiscale feature extraction methods that assign the same importance to features of every scale, we propose an adaptive multiscale feature residual module (AMFRM) that assigns different importance to features at different scales. Finally, because the input of a deep-learning-based HSI classification model is a patch cube, the only available initial information is the category of the center pixel. The patch, however, often contains pixels whose categories differ from that of the center pixel, and existing attention mechanisms do not consider the impact of such pixels on HSI classification. We therefore design a novel position attention module (PAM) that computes the similarity between the center (target) pixel and its surrounding pixels, and then pays more attention to pixels with high similarity to the center pixel. In addition, we use a spectral attention module (SAM) to obtain more discriminative spectral features. Experimental results show that the proposed AMFAN effectively improves classification accuracy and outperforms state-of-the-art CNNs.
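The core idea behind the PAM, weighting each pixel in a patch by its spectral similarity to the center pixel, can be illustrated with a minimal NumPy sketch. This is an illustrative assumption on our part (cosine similarity followed by a softmax), not the paper's exact formulation; the function name and shapes are hypothetical:

```python
import numpy as np

def position_attention(patch):
    """Illustrative position-attention step for an HSI patch.

    patch: (H, W, B) array, B spectral bands per pixel.
    Returns (H, W) attention weights that sum to 1, with higher
    weight on pixels spectrally similar to the center (target) pixel.
    """
    h, w, b = patch.shape
    center = patch[h // 2, w // 2]          # spectrum of the target pixel
    flat = patch.reshape(-1, b)
    # Cosine similarity between every pixel and the center pixel
    num = flat @ center
    den = np.linalg.norm(flat, axis=1) * np.linalg.norm(center) + 1e-12
    sim = num / den
    # Softmax turns similarities into attention weights
    e = np.exp(sim - sim.max())
    weights = e / e.sum()
    return weights.reshape(h, w)
```

Under this sketch, pixels of a different land-cover class than the center pixel receive low weight, which is the effect the PAM is designed to achieve.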
