Abstract

Satellite images are often contaminated by clouds, and cloud removal has received special attention because of the wide range of satellite image applications. As clouds thicken, removing them becomes more challenging; in such cases, auxiliary data such as near-infrared or synthetic aperture radar (SAR) imagery are commonly used for reconstruction. In this study, we address the problem with two generative adversarial networks (GANs): the first translates SAR images into optical images, and the second removes clouds using the translated images from the first GAN. We also propose dilated residual inception blocks (DRIBs) in place of the vanilla U-Net in the generator networks, and we use the structural similarity index measure (SSIM) in addition to the L1 loss function. Reducing the number of downsampling steps and expanding the receptive field with dilated convolutions increased the quality of the output images. We trained and tested both GANs on the SEN1-2 dataset, creating cloudy images by adding synthetic clouds to the optical images, and we used the SEN12MS-CR dataset to test the network's performance on real clouds. The restored images are evaluated using PSNR, SSIM, SAM, MAE, RMSE, and $Q$. Compared with state-of-the-art deep learning models, the proposed method achieves more accurate results in both the SAR-to-optical translation and cloud-removal stages.
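The combined L1 + SSIM objective mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: it uses a simplified single-window (global) SSIM rather than the standard sliding-window version, and the weighting `lam` is a hypothetical parameter, since the abstract does not state the exact loss weights.

```python
import numpy as np

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified SSIM computed over the whole image as one window.

    The standard SSIM averages this statistic over local sliding
    windows; a single global window keeps the sketch short.
    """
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )

def combined_loss(pred, target, lam=0.5):
    """Weighted sum of L1 and (1 - SSIM); `lam` is a hypothetical weight."""
    l1 = np.abs(pred - target).mean()
    return lam * l1 + (1 - lam) * (1 - global_ssim(pred, target))
```

For a perfect reconstruction (`pred == target`) both terms vanish, so the loss is zero; any distortion raises both the L1 term (pixel-wise error) and the SSIM term (structural error).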
