Abstract

Because remote sensing images are limited by sensor technology and budget, spatiotemporal fusion (STF) is an effective strategy for obtaining products with both dense temporal coverage and high spatial resolution. STF methods generally require at least three observed images, and the predicted results depend on the reference date. In practice, the scarcity of high-quality fine-resolution images makes STF models difficult to apply, and few models account for the bias introduced by different sensors. To address these problems, an enhanced STF model with degraded fine-resolution images via relativistic generative adversarial networks, called EDRGAN-STF, is proposed; it is an end-to-end network in which all bands are trained simultaneously. To reduce the bias caused by different sensors, a degraded-resolution version of a Landsat image from an arbitrary date is introduced into the STF model. The inputs of EDRGAN-STF consist only of a MODIS image on the prediction date, a reference Landsat image from an arbitrary date, and its degraded version. EDRGAN-STF comprises a generator and a relativistic average least-squares discriminator (RaLSD). In the generator, a dual-stream residual dense block is designed to fully capture local and global spatial details as well as low-frequency information, a multihierarchical feature fusion block fuses global information, and a spectral-spatial attention mechanism focuses on important spectral bands and spatial features, enhancing the reconstruction of critical regions. A new composite loss function is introduced to better optimize the designed STF model. To verify the capability of EDRGAN-STF, extensive experiments are conducted on two typical Landsat-MODIS datasets; the results show that EDRGAN-STF improves STF accuracy and has great prospects for practical applications.
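The abstract does not give the RaLSD objective explicitly, but "relativistic average least-squares discriminator" matches the standard RaLSGAN formulation, in which each critic score is compared against the average score of the opposite class and both relativistic differences are pushed toward ±1 with a least-squares penalty. The sketch below is a minimal NumPy illustration of that standard formulation, not the paper's exact loss; the function name and the choice to operate on raw critic scores are assumptions for illustration.

```python
import numpy as np

def ralsd_losses(real_scores, fake_scores):
    """Relativistic average least-squares (RaLS) GAN losses.

    A minimal sketch of the standard RaLSGAN objective (assumed here,
    not taken from the paper): each sample's critic score is compared
    against the *average* score of the opposite class, and both terms
    are driven toward +/-1 with a least-squares penalty.
    """
    real_scores = np.asarray(real_scores, dtype=float)
    fake_scores = np.asarray(fake_scores, dtype=float)

    # Relativistic average differences.
    d_real = real_scores - fake_scores.mean()  # real relative to mean fake score
    d_fake = fake_scores - real_scores.mean()  # fake relative to mean real score

    # Discriminator: push relative real scores toward +1, fake toward -1.
    loss_d = np.mean((d_real - 1.0) ** 2) + np.mean((d_fake + 1.0) ** 2)
    # Generator: the roles are swapped (fake toward +1, real toward -1).
    loss_g = np.mean((d_fake - 1.0) ** 2) + np.mean((d_real + 1.0) ** 2)
    return loss_d, loss_g
```

In an adversarial STF training loop, the discriminator would minimize `loss_d` on real fine-resolution patches versus generated ones, while the generator would minimize `loss_g` (typically combined with the other terms of the composite loss described above).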
