Abstract

Due to technical and budget constraints on current optical satellites, acquiring satellite images that are simultaneously of the best spectral and temporal resolutions is not practicable. In this article, aiming to produce products with high spectral (HS) and temporal resolutions, we introduce a two-stream spectral–temporal fusion technique based on an attention mechanism, called STA-Net. STA-Net combines high spectral and low temporal (HSLT) resolution images with low spectral and high temporal (LSHT) resolution images to generate products with the best characteristics of both. The proposed technique involves two stages. In the first stage, two fused images are generated by a two-stream architecture based on residual attention blocks. The temporal difference estimator stream estimates the temporal difference between HS images at the desired and neighboring dates. The reflectance difference estimator, the second stream, predicts the reflectance difference between the input images (HS–LS) to map LS images into HS products. In the second stage, a reconstruction network combines the outputs of the two streams via an effective learnable weighted-sum strategy. The two-stage model is trained in an end-to-end fashion using an effective loss function to ensure the best fusion quality. To the best of our knowledge, this work represents the first attempt to address spectral–temporal fusion with an end-to-end deep neural network model. Experimental results on two real datasets of Sentinel-2 (HSLT: 10 spectral bands and a long revisit period) and PlanetScope (LSHT: four spectral bands and daily images) imagery demonstrate the effectiveness of the proposed technique with respect to a baseline technique.
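To make the two-stage design concrete, below is a minimal PyTorch sketch of the architecture as described in the abstract: two difference-estimator streams built from residual attention blocks, followed by a learnable per-pixel weighted-sum reconstruction. All class names, layer widths, the SE-style channel attention, and the 1x1 spectral projection from LS to HS bands are illustrative assumptions, not the authors' implementation; it also assumes the HS and LS images are co-registered and resampled to a common grid.

```python
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    """Assumed residual block with SE-style channel attention."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        return x + y * self.attn(y)  # residual connection with channel reweighting


class DifferenceEstimator(nn.Module):
    """One stream: predicts a difference image (temporal or reflectance)."""
    def __init__(self, in_ch, out_ch, width=64, n_blocks=4):
        super().__init__()
        layers = [nn.Conv2d(in_ch, width, 3, padding=1)]
        layers += [ResidualAttentionBlock(width) for _ in range(n_blocks)]
        layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


class STANetSketch(nn.Module):
    """Hypothetical two-stream fusion with a learnable weighted-sum reconstruction."""
    def __init__(self, hs_bands=10, ls_bands=4):
        super().__init__()
        # Temporal stream: LS pair (neighboring date t0, prediction date tp) -> HS temporal difference.
        self.temporal = DifferenceEstimator(2 * ls_bands, hs_bands)
        # Reflectance stream: LS at tp plus HS at t0 -> HS-LS reflectance difference.
        self.reflectance = DifferenceEstimator(ls_bands + hs_bands, hs_bands)
        # Assumed spectral projection from LS to HS bands, so the reflectance residual can be added.
        self.ls_to_hs = nn.Conv2d(ls_bands, hs_bands, 1)
        # Reconstruction: per-pixel, per-band weights for combining the two candidates.
        self.weights = nn.Sequential(
            nn.Conv2d(2 * hs_bands, hs_bands, 3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, hs_t0, ls_t0, ls_tp):
        # Candidate 1: HS at t0 plus predicted temporal change.
        cand_temporal = hs_t0 + self.temporal(torch.cat([ls_t0, ls_tp], dim=1))
        # Candidate 2: LS at tp mapped to HS plus predicted reflectance difference.
        cand_reflect = self.ls_to_hs(ls_tp) + self.reflectance(torch.cat([ls_tp, hs_t0], dim=1))
        # Learnable weighted sum of the two candidates.
        w = self.weights(torch.cat([cand_temporal, cand_reflect], dim=1))
        return w * cand_temporal + (1 - w) * cand_reflect


# Toy usage with random tensors (band counts follow the abstract: 10 HS, 4 LS).
model = STANetSketch()
hs_t0 = torch.randn(1, 10, 128, 128)   # HS image at a neighboring date
ls_t0 = torch.randn(1, 4, 128, 128)    # LS image at the same neighboring date
ls_tp = torch.randn(1, 4, 128, 128)    # LS image at the prediction date
pred = model(hs_t0, ls_t0, ls_tp)      # predicted HS image at the prediction date
```

In this sketch the whole model is differentiable, so the two streams and the reconstruction weights can be trained jointly end to end against a reconstruction loss, mirroring the end-to-end training described in the abstract.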
