Abstract

Numerous state-of-the-art transformer-based techniques with self-attention mechanisms have recently proven highly effective for hyperspectral image (HSI) classification. However, conventional transformer-based methods suffer from the following problems when processing three-dimensional HSIs: (1) flattening HSIs into 1D sequences discards the 3D structural information; (2) they require too many parameters for hyperspectral image classification tasks; and (3) they capture only spatial information while neglecting spectral information. To solve these problems, we propose a novel Quaternion Transformer Network (QTN) for recovering self-adaptive and long-range correlations in HSIs. Specifically, we first develop a band adaptive selection module (BASM) to produce quaternion data from HSIs. We then propose a novel quaternion self-attention (QSA) mechanism to capture both local and global representations. Finally, we propose a novel transformer method, i.e., QTN, by stacking a series of QSA layers for hyperspectral classification. The proposed QTN exploits computation with quaternion algebra in hypercomplex spaces. Extensive experiments on three public datasets demonstrate that the QTN outperforms state-of-the-art vision transformers and convolutional neural networks.
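The abstract does not spell out the QSA formulation, but quaternion-valued networks generally build on the Hamilton product, which mixes the four quaternion components and shares parameters across them. A minimal sketch of that product (the component layout `(r, i, j, k)` and the function name are illustrative assumptions, not the paper's code):

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of two quaternions represented as (r, i, j, k) arrays.

    Quaternion layers replace independent real-valued multiplications with
    this product, so each output component depends on all four input
    components. (Illustrative sketch; the paper's exact QSA computation is
    not given in the abstract.)
    """
    pr, pi, pj, pk = p
    qr, qi, qj, qk = q
    return np.array([
        pr * qr - pi * qi - pj * qj - pk * qk,  # real part
        pr * qi + pi * qr + pj * qk - pk * qj,  # i part
        pr * qj - pi * qk + pj * qr + pk * qi,  # j part
        pr * qk + pi * qj - pj * qi + pk * qr,  # k part
    ])

# Example: i * j = k in quaternion algebra.
i_unit = np.array([0.0, 1.0, 0.0, 0.0])
j_unit = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton_product(i_unit, j_unit))  # -> [0. 0. 0. 1.]
```

Because one quaternion weight couples four real channels, a quaternion layer needs roughly a quarter of the parameters of an equivalent real-valued layer, which is consistent with the parameter-count motivation stated above.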
