Abstract

Cloud cover is a serious impediment to land surface analysis from remote sensing images: thick clouds cause complete obstruction with loss of information, while semi-transparent thin clouds produce blurring effects. While thick clouds require complete pixel replacement, thin cloud removal is particularly challenging because the atmospheric and land-cover information are intertwined. In this paper, we address this problem and propose Cloud-GAN to learn the mapping between cloudy and cloud-free images. The adversarial loss in the proposed method constrains the distribution of generated images to be close to the underlying distribution of non-cloudy images. An additional cycle consistency loss further restrains the generator to predict cloud-free images of the same scene as reflected in the cloudy images. Our method not only removes the need for a paired (cloudy/cloud-free) training dataset but also avoids the need for an additional (expensive) spectral source of information, such as cloud-penetrating Synthetic Aperture Radar imagery. Lastly, we demonstrate the efficacy of our technique by training on an openly available and fairly new Sentinel-2 imagery dataset containing real clouds. We also show a significant improvement in PSNR after cloud removal on synthetic images, validating the competence of our methodology.
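The abstract does not give Cloud-GAN's architecture or hyper-parameters, so the sketch below is only a minimal illustration of how a CycleGAN-style objective combines the adversarial and cycle consistency terms described above, together with the PSNR metric used in the synthetic evaluation. The tiny generator and discriminator modules, the least-squares adversarial loss, and the weight lambda_cyc are illustrative placeholder assumptions, not the paper's implementation.

```
# Minimal sketch of a CycleGAN-style objective for unpaired cloud removal.
# Architectures, losses, and weights here are illustrative placeholders.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Placeholder image-to-image generator (cloudy -> cloud-free, or back)."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Placeholder discriminator judging how realistic a cloud-free image looks."""
    def __init__(self, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_c2f = TinyGenerator()      # cloudy -> cloud-free
G_f2c = TinyGenerator()      # cloud-free -> cloudy (for the cycle)
D_free = TinyDiscriminator() # discriminates real vs. generated cloud-free images

adv_loss = nn.MSELoss()      # least-squares adversarial loss (assumption)
cyc_loss = nn.L1Loss()       # cycle-consistency loss
lambda_cyc = 10.0            # illustrative cycle-loss weight

# Unpaired batches: cloudy images and (different-scene) cloud-free images.
cloudy = torch.rand(4, 3, 64, 64)
cloud_free = torch.rand(4, 3, 64, 64)

# Generator objective: fool D_free and reconstruct the original cloudy scene.
fake_free = G_c2f(cloudy)
pred = D_free(fake_free)
loss_adv = adv_loss(pred, torch.ones_like(pred))   # adversarial term
loss_cyc = cyc_loss(G_f2c(fake_free), cloudy)      # cycle consistency term
loss_G = loss_adv + lambda_cyc * loss_cyc
print(f"generator loss: {loss_G.item():.4f}")

# PSNR metric for the synthetic-image evaluation (images assumed in [0, 1]).
def psnr(pred, target, max_val=1.0):
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```

In a full training loop the discriminator would be updated with real cloud-free and generated images in turn, and PSNR would be computed on synthetic pairs where the ground-truth cloud-free image is known.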
