Abstract

Spatiotemporal fusion technology provides a feasible, economical solution for generating remote sensing images with high spatiotemporal resolution. Recently proposed learning-based methods achieve high accuracy; however, their network structures are relatively simple and cannot extract deep features from the input images, so the fused images fail to recover fine landform details and their quality suffers. Moreover, most methods rely on a single pixel-level mean squared error (MSE) loss, which makes high-frequency details difficult to recover and reduces fusion accuracy. In this paper, we propose an edge structure loss that is added to a spatiotemporal fusion network requiring no pre-trained model. To fully extract the spectral information and spatial details of the images, we adapt a DenseNet-BC module to the image fusion task, allowing features to propagate more easily through the entire network. This improvement also gives the network better generalizability and robustness for spatiotemporal fusion. In addition, the proposed edge loss further improves the accuracy of the fusion results. Experiments comparing our method with existing spatiotemporal fusion algorithms in different regions show that it is more fault tolerant, achieves higher accuracy on quality evaluation indicators, and produces better visual results.
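
The abstract describes combining a pixel-level MSE term with an edge-structure term. The exact formulation is not given here, so the sketch below is only a minimal illustration of one common way to realize such a loss: Sobel gradient magnitudes compared with an L1 penalty and blended with MSE via a weighting factor. The function names, the Sobel operator, the L1 comparison, and the `edge_weight` parameter are all assumptions for illustration, not the paper's definition.

```python
import torch
import torch.nn.functional as F


def sobel_edges(img: torch.Tensor) -> torch.Tensor:
    """Per-channel Sobel gradient magnitude for a (N, C, H, W) tensor."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    c = img.shape[1]
    # Depthwise convolution: one Sobel filter per input channel.
    gx = F.conv2d(img, kx.repeat(c, 1, 1, 1), padding=1, groups=c)
    gy = F.conv2d(img, ky.repeat(c, 1, 1, 1), padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)


def fusion_loss(pred: torch.Tensor, target: torch.Tensor,
                edge_weight: float = 0.1) -> torch.Tensor:
    """Pixel-level MSE plus an edge-structure term on gradient magnitudes.

    `edge_weight` is a hypothetical balancing factor; the paper's actual
    weighting and edge definition may differ.
    """
    mse = F.mse_loss(pred, target)
    edge = F.l1_loss(sobel_edges(pred), sobel_edges(target))
    return mse + edge_weight * edge
```

Under this kind of formulation, the MSE term drives overall radiometric fidelity while the edge term penalizes blurred boundaries, which is consistent with the abstract's claim that the edge loss helps recover high-frequency detail.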
