Abstract

With advances in sensor, communication, and computing technologies, remote sensing (RS) imaging methods are becoming increasingly diverse. A variety of Earth observation platforms now provide abundant multi-source RS data, which compensates for the incompleteness of the observation information carried by any single-source RS image. In this paper, a method for classifying hyperspectral images (HSI) and LiDAR data based on a spectral-spatial-elevation fusion Transformer (S2EFT) framework is proposed, introducing the Transformer architecture into multi-source RS image classification. Two simple but effective modules, a spatial information recognition module and a sliding group spectral embedding module, are added to the proposed framework, and the patch form commonly used in traditional convolutional neural networks (CNNs) is adopted as the input. This design addresses the Transformer's insufficient attention to local information, reduces redundant spatial information, enhances information propagation, and fully integrates the multidimensional information of pixels at the same spatial position. Experiments on three real datasets show that the proposed method outperforms existing methods. The source code can be downloaded from https://github.com/SYFYN0317/S2EFT.
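To make the patch-form multi-source input concrete, here is a minimal NumPy sketch (not the authors' code; the function name and patch size are illustrative assumptions) of how an HSI cube and a co-registered LiDAR elevation raster can be fused channel-wise so that each spatial patch carries spectral, spatial, and elevation information for the same pixel positions:

```python
import numpy as np

def fuse_and_patch(hsi, lidar, center, patch_size=7):
    """Hypothetical helper: extract a patch centered at `center` from the
    channel-wise fusion of an HSI cube (H, W, B) and a LiDAR DSM (H, W)."""
    # Append the elevation raster as an extra channel: (H, W, B + 1)
    fused = np.concatenate([hsi, lidar[..., np.newaxis]], axis=-1)
    r = patch_size // 2
    y, x = center
    # Reflect-pad the borders so edge pixels still yield full-size patches
    padded = np.pad(fused, ((r, r), (r, r), (0, 0)), mode="reflect")
    # After padding, original pixel (y, x) sits at (y + r, x + r)
    return padded[y:y + patch_size, x:x + patch_size, :]

hsi = np.random.rand(64, 64, 30)   # toy hyperspectral cube with 30 bands
lidar = np.random.rand(64, 64)     # toy elevation raster
patch = fuse_and_patch(hsi, lidar, center=(10, 20))
print(patch.shape)  # (7, 7, 31)
```

Patches of this form can then be tokenized and fed to a Transformer encoder, which is the general role the patch input plays in the proposed framework.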
