Abstract
Cloud removal is an important topic in remote sensing, as it improves the usability of medium- and high-resolution optical imagery for Earth monitoring and study. Recent applications of deep generative models and sequence-to-sequence models have significantly advanced the field. Nevertheless, some gaps remain: the amount of cloud coverage, temporal changes in the landscape, and the density and thickness of clouds need further investigation. In this work, we address some of these gaps by introducing a novel deep model. The proposed model is multi-modal, relying on both spatial and temporal sources of information to restore the entire optical scene of interest. We combine the outcomes of temporal-sequence blending and of direct translation from Synthetic Aperture Radar (SAR) to optical images to obtain a pixel-wise restoration of the whole scene. The reconstructed images preserve scene details without requiring a largely clean reference image. The advantage of our approach is demonstrated across various atmospheric conditions on different datasets. Quantitative and qualitative results show that the proposed method produces cloud-free images while coping with landscape changes.
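The abstract states that the outputs of a temporal-sequence branch and a SAR-to-optical translation branch are combined into a pixel-wise restoration, but does not specify the fusion rule. The sketch below illustrates one plausible scheme, a per-pixel blend weighted by a cloud-probability mask; the function name, array shapes, and the linear weighting are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fuse_reconstructions(temporal_pred, sar_pred, cloud_mask):
    """Blend two per-pixel reconstructions of an optical scene.

    temporal_pred: (H, W, C) estimate from a temporal-sequence model.
    sar_pred:      (H, W, C) estimate translated from SAR imagery.
    cloud_mask:    (H, W) values in [0, 1]; 1 = fully occluded pixel.

    Hypothetical rule: cloud-free pixels rely on the temporal branch,
    occluded pixels rely on the SAR-to-optical branch.
    """
    w = cloud_mask[..., np.newaxis]  # broadcast weight over channels
    return (1.0 - w) * temporal_pred + w * sar_pred

# Toy example: a 2x2 single-channel scene.
temporal = np.full((2, 2, 1), 0.8)   # temporal-branch estimate
sar = np.full((2, 2, 1), 0.2)        # SAR-translation estimate
mask = np.array([[0.0, 1.0],
                 [0.5, 0.0]])        # per-pixel cloud probability
fused = fuse_reconstructions(temporal, sar, mask)
```

With this toy mask, the clear pixel keeps the temporal value (0.8), the fully clouded pixel takes the SAR-derived value (0.2), and the half-clouded pixel gets their average (0.5).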
Published in: IEEE Transactions on Geoscience and Remote Sensing