Abstract

Hyperspectral image (HSI) classification methods usually combine spatial information with rich spectral information to capture the fine-grained characteristics of ground objects, improve classification accuracy, and lay a foundation for further analysis of those objects. However, current research on HSI classification focuses mainly on joint spectral-spatial feature extraction and does not fully account for the differences in the sizes of the ground objects from which features are extracted. To alleviate this problem, we propose a new multi-scale and spectral-spatial attention network (MS3A-Net), which places a multi-scale attention feature extraction (MAFE) block and a spectral-spatial cooperative attention (SSCA) block at the front end of the network. The proposed MAFE block extracts features of ground objects at different scales and minimizes the interference caused by objects of differing sizes. The proposed MS3A-Net increases the influence of informative pixels in both the spatial and spectral dimensions and suppresses the effect of uninformative or even disturbing pixels. Extensive quantitative and qualitative experiments on the Indian Pines, Salinas Valley, and University of Pavia data sets demonstrate the performance of the proposed network.
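The abstract does not specify the internal layout of the MAFE and SSCA blocks, so the following is only a minimal sketch of the two ideas it names: multi-scale feature extraction via parallel convolutions with different kernel sizes, and attention applied along both the spectral (channel) and spatial dimensions. All layer sizes, kernel choices, and the specific attention form are assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn


class MultiScaleBlock(nn.Module):
    """Parallel 3x3 / 5x5 / 7x7 branches capture ground objects of different sizes."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.fuse = nn.Conv2d(3 * out_ch, out_ch, 1)  # 1x1 conv merges the scales

    def forward(self, x):
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))


class SpectralSpatialAttention(nn.Module):
    """Channel (spectral) gate followed by a single-channel spatial gate."""
    def __init__(self, ch, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)      # re-weight spectral channels
        return x * self.spatial_gate(x)   # re-weight spatial positions


# Example: a batch of 16 HSI patches with 30 (e.g. PCA-reduced) bands and 9x9 pixels.
patches = torch.randn(16, 30, 9, 9)
features = MultiScaleBlock(30, 64)(patches)
attended = SpectralSpatialAttention(64)(features)
print(attended.shape)  # torch.Size([16, 64, 9, 9])
```

The multi-scale branches address the size-variation problem described above, while the two gates emphasize informative spectral bands and spatial positions and suppress uninformative ones, mirroring the roles attributed to the MAFE and SSCA blocks.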
