Abstract

The goal of hyperspectral image (HSI) classification is to assign a land-cover label to each HSI pixel in a patch-wise manner. Recently, sequential models such as recurrent neural networks (RNNs) have been developed as HSI classifiers; these first scan the HSI patch into a pixel sequence according to a chosen scanning order. However, an RNN imposes an ordering bias and cannot effectively allocate attention to each pixel in the sequence, and previous methods that average RNN features over multiple scanning orders are limited by the validity of those orders. To address this issue, we draw inspiration from the Transformer and its self-attention mechanism, which can discriminatively assign attention to each pixel of the sequence and to each scanning order. In this study, we therefore extend sequential HSI classifiers with a specially designed RNN-Transformer (RT) model that captures the multiple sequential characteristics of the pixels in an HSI patch. Specifically, we introduce a multiscanning-controlled positional embedding strategy for the RT model to support the fusion of multiple features. Furthermore, we propose an RT encoder that integrates ordering bias and attention re-allocation for sequence-level feature generation, together with a spectral-spatial soft masked self-attention for feature enhancement. Finally, an additional Fusion Transformer allocates attention across scanning orders. As a result, the whole network achieves classification performance competitive with other state-of-the-art methods on four publicly accessible datasets. Our study further extends the research on sequential HSI classifiers.
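The scanning step described above (turning a 2-D HSI patch into pixel sequences under several orders) can be sketched as follows. This is an illustrative example, not the paper's implementation; the function name `scan_patch` and the particular set of orders are assumptions.

```python
import numpy as np

def scan_patch(patch):
    """Flatten an HSI patch of shape (H, W, C) into pixel sequences of
    shape (H*W, C) under several scanning orders (illustrative set)."""
    h, w, c = patch.shape
    seqs = {}
    # Left-to-right, top-to-bottom raster scan
    seqs["row_major"] = patch.reshape(h * w, c)
    # Right-to-left, bottom-to-top (reversed raster)
    seqs["row_reverse"] = patch[::-1, ::-1].reshape(h * w, c)
    # Snake (boustrophedon) scan: alternate direction on every other row
    snake = patch.copy()
    snake[1::2] = snake[1::2, ::-1]
    seqs["snake"] = snake.reshape(h * w, c)
    # Column-wise scan: top-to-bottom, column by column
    seqs["col_major"] = patch.transpose(1, 0, 2).reshape(h * w, c)
    return seqs

# Example: a 3x3 patch with 2 spectral bands
patch = np.arange(3 * 3 * 2).reshape(3, 3, 2)
seqs = scan_patch(patch)
```

Each resulting sequence can then be fed to an RNN, and the per-order features fused downstream.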
