Abstract

In recent years, deep learning has been widely applied to hyperspectral image (HSI) classification and has demonstrated strong capability. In particular, convolutional neural networks (CNNs) have achieved attractive performance in HSI classification. However, HSI contains substantial redundant information, and CNN-based models are constrained by the limited receptive field of convolutions, making it difficult to balance performance against model depth. Moreover, since the spectrum of an HSI pixel can be regarded as sequence data, CNN-based models struggle to mine such sequential features. In this paper, we propose a model named SSA-Transformer to address these problems and extract the spectral-spatial features of HSI more efficiently. SSA-Transformer combines a modified CNN-based spectral-spatial attention mechanism with a self-attention-based transformer that uses dense connections, allowing it to fuse the local and global features of HSI and improve classification performance. A series of experiments showed that SSA-Transformer achieved competitive classification accuracy compared with other CNN-based methods on three HSI datasets: University of Pavia (PU), Salinas (SA), and Kennedy Space Center (KSC).
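To make the high-level design concrete, the PyTorch code below is a minimal sketch of the kind of architecture the abstract describes: a CBAM-style spectral (channel) and spatial attention front end, followed by a transformer encoder whose layers are densely connected. The module names (SpectralSpatialAttention, DenseTransformer, SSATransformerSketch), all layer sizes, and the concatenate-then-project dense wiring are illustrative assumptions, not the authors' exact implementation; only the Pavia University shapes (103 bands, 9 classes) come from the datasets named in the abstract.

# Minimal sketch, assuming a CBAM-style attention front end and a
# densely connected transformer encoder; not the authors' exact model.
import torch
import torch.nn as nn


class SpectralSpatialAttention(nn.Module):
    """Spectral (channel) attention followed by spatial attention."""

    def __init__(self, bands: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(bands, bands // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(bands // reduction, bands),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (B, bands, H, W)
        # Spectral attention: squeeze spatial dims, reweight bands.
        avg = x.mean(dim=(2, 3))                # (B, bands)
        mx = x.amax(dim=(2, 3))                 # (B, bands)
        w = torch.sigmoid(self.channel_mlp(avg) + self.channel_mlp(mx))
        x = x * w[:, :, None, None]
        # Spatial attention: pool over bands, reweight pixel locations.
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], 1)
        return x * torch.sigmoid(self.spatial_conv(s))


class DenseTransformer(nn.Module):
    """Transformer encoder with dense connections: each layer receives
    the projected concatenation of all earlier layer outputs."""

    def __init__(self, dim: int, depth: int = 3, heads: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
            for _ in range(depth)
        )
        self.fuse = nn.ModuleList(
            nn.Linear(dim * (i + 1), dim) for i in range(depth)
        )

    def forward(self, tokens):                  # tokens: (B, N, dim)
        feats = [tokens]
        for layer, fuse in zip(self.layers, self.fuse):
            tokens = layer(fuse(torch.cat(feats, dim=-1)))
            feats.append(tokens)
        return tokens


class SSATransformerSketch(nn.Module):
    def __init__(self, bands: int = 103, dim: int = 64, classes: int = 9):
        super().__init__()
        self.attn = SpectralSpatialAttention(bands)
        self.embed = nn.Conv2d(bands, dim, kernel_size=1)  # pixels -> tokens
        self.encoder = DenseTransformer(dim)
        self.head = nn.Linear(dim, classes)

    def forward(self, x):                       # x: (B, bands, H, W) patch
        x = self.embed(self.attn(x))            # local features, (B, dim, H, W)
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, dim)
        z = self.encoder(tokens).mean(dim=1)    # global features, pooled
        return self.head(z)


# Example: classify the centre pixel of a 7x7 Pavia University patch
# (103 bands, 9 classes; batch of 2).
logits = SSATransformerSketch()(torch.randn(2, 103, 7, 7))
print(logits.shape)                             # torch.Size([2, 9])

The attention stage reweights bands and pixel locations before tokenization (the local, CNN-side features), while the densely connected encoder lets every transformer layer reuse all earlier representations (the global, sequence-side features), mirroring the local-global fusion the abstract attributes to the model.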
