Abstract

In the field of remote sensing, clouds and cloud shadows contaminate optical remote sensing images; under high cloud cover in particular, certain ground-object information is lost entirely. The presence of thick cloud severely limits the use of optical images in production and scientific research, so further research on removing thick cloud occlusion is critical to improving the utilization of optical imagery. Most state-of-the-art cloud removal methods are based on convolutional neural networks (CNNs). However, because CNNs cannot capture global context information, these approaches are difficult to improve further. Inspired by the transformer and by multisource image fusion cloud removal methods, we propose a transformer-based cloud removal method (Former-CR) that reconstructs cloudless images directly from SAR images and cloudy optical images. The transformer-based model efficiently extracts and fuses global and local context information from SAR and optical images, generating high-quality cloudless images with greater global consistency. To enhance the global structure, local details, and visual quality of the reconstructed image, we design a new loss function to guide image reconstruction. Qualitative and quantitative comparisons with several SAR-based cloud removal methods on the SEN12MS-CR dataset demonstrate that the proposed method is effective and superior.

