Abstract

Due to the limitations of current technology and budget, a single satellite sensor cannot obtain remote sensing images with both high spatial and high temporal resolution. Remote sensing image spatio-temporal fusion is therefore considered an effective solution and has attracted extensive attention. In the field of deep learning, the fixed receptive field of a convolutional neural network makes it unable to model correlations among global features, and features extracted only through convolution operations lack the ability to capture long-distance dependencies. At the same time, overly complex fusion schemes do not integrate temporal and spatial features well. To address these problems, we propose MSFusion, a multi-stage remote sensing image spatio-temporal fusion model based on the Texture Transformer and convolutional neural networks. The model combines the advantages of the Transformer and convolutional networks: it uses a lightweight convolutional network to extract spatial features and temporal-discrepancy features, uses the Transformer to learn global temporal correlations, and finally fuses the temporal features with the spatial features. To make full use of the features obtained at different stages, we design a cross-stage adaptive fusion module (CSAFM). The module adopts a self-attention mechanism to adaptively integrate features at different scales while accounting for temporal and spatial characteristics. To test the robustness of the model, experiments are carried out on three datasets: CIA, LGC, and DX. Compared with five typical spatio-temporal fusion algorithms, MSFusion obtains excellent results, demonstrating its superiority.
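The abstract describes CSAFM only at a high level: features from different stages are projected to a common scale and adaptively integrated with self-attention. The sketch below is one plausible reading of that idea in PyTorch; the class name, layer choices, and dimensions are illustrative assumptions and do not reproduce the paper's actual implementation.

```python
import torch
import torch.nn as nn


class CrossStageAdaptiveFusion(nn.Module):
    """Hypothetical sketch of a CSAFM-style block: features from several
    stages are projected to a shared channel width, treated as tokens, and
    combined with self-attention so each spatial location adaptively
    weights the contribution of every stage."""

    def __init__(self, in_channels, embed_dim=64, num_heads=4):
        super().__init__()
        # one 1x1 projection per input stage (channel counts may differ)
        self.projections = nn.ModuleList(
            nn.Conv2d(c, embed_dim, kernel_size=1) for c in in_channels
        )
        self.attention = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.fuse = nn.Conv2d(embed_dim, embed_dim, kernel_size=3, padding=1)

    def forward(self, stage_features):
        # stage_features: list of (B, C_i, H_i, W_i) tensors from different stages
        target_size = stage_features[0].shape[-2:]
        tokens = []
        for proj, feat in zip(self.projections, stage_features):
            feat = proj(feat)
            # resample every stage to the finest resolution before fusion
            feat = nn.functional.interpolate(
                feat, size=target_size, mode="bilinear", align_corners=False
            )
            tokens.append(feat)

        stacked = torch.stack(tokens, dim=1)            # (B, S, D, H, W)
        b, s, d, h, w = stacked.shape
        # flatten spatial dims so each pixel attends across the S stages
        seq = stacked.permute(0, 3, 4, 1, 2).reshape(b * h * w, s, d)
        attended, _ = self.attention(seq, seq, seq)     # (B*H*W, S, D)
        fused = attended.mean(dim=1).reshape(b, h, w, d).permute(0, 3, 1, 2)
        return self.fuse(fused)                         # (B, D, H, W)


if __name__ == "__main__":
    # toy inputs standing in for coarse- and fine-stage feature maps
    f1 = torch.randn(2, 32, 64, 64)
    f2 = torch.randn(2, 64, 32, 32)
    module = CrossStageAdaptiveFusion(in_channels=[32, 64])
    out = module([f1, f2])
    print(out.shape)  # torch.Size([2, 64, 64, 64])
```

Treating the stage index as the attention sequence lets each pixel learn how much each scale should contribute, which is one way to realize the "adaptive integration of features at different scales" the abstract refers to.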
