Abstract

In optical remote sensing and earth observation, clouds severely obscure the land surface and degrade image quality. In recent years, many methods have been proposed to mitigate the effects of cloud cover. However, when a single degraded image is restored by autoencoder-based methods, the reconstructed regions tend to be blurred. This letter focuses on removing clouds from single optical remote sensing images with autoencoder-based methods, without relying on multitemporal information, while mitigating the blur caused by missing information. To this end, we propose a novel cloud removal method that combines image inpainting and image denoising, called the Cloud-Aware Generative Network (CAGN). CAGN consists of two stages: a recurrent convolutional network that detects potential cloud regions, followed by an autoencoder that removes the clouds. A side-guided strategy adds attention mechanisms to the first stage to assist in predicting the cloud mask. Furthermore, to update the mask adaptively and restore degraded regions progressively, partial convolutions are embedded in the autoencoder so that, at each layer, the convolution over thick-cloud regions is conditioned on the valid pixels only. Extensive experiments demonstrate that CAGN achieves a considerable increase in peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) over a competitive baseline model.
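The mask-conditioned convolution described above follows the partial-convolution idea (Liu et al., ECCV 2018): each output is computed only from valid pixels in its window, renormalized by the number of valid pixels, and the mask is updated so that restored regions count as valid in deeper layers. The following PyTorch sketch illustrates that mechanism only; it is not the authors' exact implementation, and the class name PartialConv2d, the bias-free convolution, and the hard binary mask update are simplifying assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PartialConv2d(nn.Module):
        """Convolve only over valid (cloud-free) pixels and emit an
        updated mask, so regions restored in shallow layers count as
        valid in deeper layers."""

        def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                                  stride=stride, padding=padding, bias=False)
            # Fixed all-ones kernel that counts valid pixels under each window.
            self.register_buffer(
                "mask_kernel",
                torch.ones(out_ch, in_ch, kernel_size, kernel_size))
            self.window_size = in_ch * kernel_size * kernel_size
            self.stride, self.padding = stride, padding

        def forward(self, x, mask):
            # mask: 1.0 for valid (cloud-free) pixels, 0.0 for cloudy pixels.
            with torch.no_grad():
                valid_count = F.conv2d(mask, self.mask_kernel,
                                       stride=self.stride, padding=self.padding)
            out = self.conv(x * mask)
            # Renormalize by the fraction of valid pixels in each window.
            out = out * (self.window_size / valid_count.clamp(min=1.0))
            # Mask update: a position becomes valid once any pixel in its
            # receptive field was valid, shrinking the hole layer by layer.
            new_mask = (valid_count > 0).float()
            return out * new_mask, new_mask

    # Example: a simulated thick-cloud hole shrinks as layers are stacked.
    pconv = PartialConv2d(3, 16, kernel_size=3, padding=1)
    img = torch.randn(1, 3, 64, 64)
    mask = torch.ones(1, 3, 64, 64)
    mask[:, :, 20:40, 20:40] = 0.0   # cloud region predicted by stage one
    features, updated_mask = pconv(img, mask)

Stacking such layers is what makes the adaptive mask update greedy: each layer fills in a rim of the cloud hole, and the updated mask passes the newly restored pixels to the next layer as valid input.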
