Abstract

Cloud and cloud shadow cause information loss in optical remote sensing analysis. In South East Asia, and in Vietnam in particular, Sentinel-2 imagery has a short revisit cycle, but observations tend to be contaminated by cloud and cloud shadow. Traditional cloud removal methods require multi-temporal data acquired on close dates to avoid seasonal land cover changes. In this study, a method integrating Deep Convolutional Neural Networks (DCNN) and a Generative Adversarial Network (GAN) was proposed. The machine learning model estimates the information lost over cloud-contaminated areas on a single Sentinel-2 image. The results show that, for images with a cloud cover rate under 25%, the model can reconstruct cloudless images with a PSNR of 25–40 dB and an SSIM of 0.86–0.93 relative to real clear images. With cloud cover rates up to 40%, however, model performance is heavily affected by the spatial distribution of cloud and cloud shadow areas. By combining DCNN and GAN, the method proves to be an effective tool for removing clouds from images with low and medium cloud cover rates, enriching the clear optical remote sensing data sources available for environmental monitoring.
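As a concrete illustration of the evaluation metrics reported above, the short sketch below computes PSNR and SSIM between a reconstructed (de-clouded) patch and a clear reference patch. This is a minimal sketch, not the authors' evaluation code: the use of scikit-image and the synthetic stand-in arrays are assumptions.

# Hypothetical evaluation sketch: PSNR and SSIM between a reconstructed image
# and a real clear reference, the two metrics cited in the abstract.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_reconstruction(reconstructed: np.ndarray, reference: np.ndarray):
    """Return (PSNR in dB, SSIM) for two H x W x C reflectance arrays."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=data_range)
    ssim = structural_similarity(reference, reconstructed,
                                 data_range=data_range, channel_axis=-1)
    return psnr, ssim

# Synthetic arrays standing in for Sentinel-2 patches (assumption, for illustration only):
rng = np.random.default_rng(0)
reference = rng.random((256, 256, 3)).astype(np.float32)
reconstructed = np.clip(reference + rng.normal(0, 0.02, reference.shape), 0, 1).astype(np.float32)
print(evaluate_reconstruction(reconstructed, reference))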
