Abstract

Cloud removal is a ubiquitous and important task in remote sensing image processing that aims to restore ground regions occluded by clouds. Removing clouds from a single satellite image is challenging: clouds are hard to distinguish from white objects on the ground, and the irregular missing regions must be filled with visual consistency. In this article, we propose a novel two-stage cloud removal method. The first stage is cloud segmentation: a U-Net extracts the clouds and removes thin clouds directly. The second stage is image restoration: a generative adversarial network (GAN) removes thick clouds and recovers the corresponding irregular missing regions. We evaluate the proposed scheme on both synthetic images and real satellite images (over <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$20\,000\, \times \,20\,000$ </tex-math></inline-formula> pixels). On synthetic images with cloud coverage below 40%, the proposed scheme improves structural similarity (SSIM) by 0.049–0.078 and peak signal-to-noise ratio (PSNR) by 3.8–6.2 dB, and reduces the <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\ell _{1}$ </tex-math></inline-formula>-norm error by 49%–78%, compared with Pix2Pix, a state-of-the-art deep learning method. On real satellite images, the proposed scheme produces visually consistent results.
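The abstract reports results in PSNR and <inline-formula xmlns:mml="http://www.w3.org/1998/Math/MathML" xmlns:xlink="http://www.w3.org/1999/xlink"> <tex-math notation="LaTeX">$\ell _{1}$ </tex-math></inline-formula>-norm error. As a point of reference, a minimal sketch of these two standard evaluation metrics (using their textbook definitions, not code from the paper; SSIM is omitted since it requires windowed statistics) might look like:

```python
import numpy as np

def psnr(ref, est, max_val=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def l1_error(ref, est):
    """Mean absolute per-pixel error (l1-norm error, averaged)."""
    return np.mean(np.abs(ref.astype(np.float64) - est.astype(np.float64)))
```

For example, an 8-bit restoration that is uniformly off by 10 gray levels has an MSE of 100 and thus a PSNR of about 28.13 dB.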
