Abstract

Clouds are one of the most serious disturbances when using satellite imagery for ground observations. The semi-translucent nature of thin clouds makes 2D ground scene reconstruction from a single satellite image possible. In this paper, we propose an effective framework for thin cloud removal involving two aspects: a network architecture and a training strategy. For the network architecture, a Wasserstein generative adversarial network (WGAN) in YUV color space, called YUV-GAN, is proposed. Unlike most existing approaches, which operate in RGB color space, our method performs end-to-end thin cloud removal by learning the luminance and chroma components independently, which effectively reduces the number of unrecoverable bright and dark pixels. To preserve more detailed features, the generator adopts a residual encoding–decoding network without down-sampling and up-sampling layers, and it competes effectively with a residual discriminator, improving the accuracy of scene identification. For the training strategy, a transfer-learning-based method was applied. Rather than training the deep network on either simulated or scarce real data alone, we first trained the YUV-GAN on an adequate number of simulated pairs; the pre-trained convolutional layers were then fine-tuned on real pairs to improve the model's applicability to real cloudy images. Qualitative and quantitative results on the RICE1 and Sentinel-2A datasets confirmed that our YUV-GAN achieved state-of-the-art performance compared with other approaches. Additionally, combining the YUV-GAN with the transfer-learning-based training strategy led to better performance in the case of scarce training data.
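The luminance/chroma decomposition at the heart of the YUV-GAN can be illustrated with a standard color-space transform. The sketch below uses the common BT.601 RGB-to-YUV matrix; the paper's exact transform coefficients are not stated here, so this particular matrix is an assumption.

```python
import numpy as np

# BT.601 RGB -> YUV conversion matrix (a standard choice; the paper's
# exact transform coefficients are an assumption in this sketch).
RGB2YUV = np.array([
    [ 0.299,    0.587,    0.114  ],   # Y: luminance
    [-0.14713, -0.28886,  0.436  ],   # U: blue-difference chroma
    [ 0.615,   -0.51499, -0.10001],   # V: red-difference chroma
])

def rgb_to_yuv(img):
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YUV.

    The Y channel and the U/V channels can then be handled by separate
    branches, mirroring how the YUV-GAN learns luminance and chroma
    components independently.
    """
    return img @ RGB2YUV.T

def yuv_to_rgb(img):
    """Inverse transform, used to assemble the final cloud-free RGB image."""
    return img @ np.linalg.inv(RGB2YUV).T
```

Because a thin cloud mostly raises luminance while leaving chroma comparatively intact, operating on Y separately from U/V lets the network correct brightness without distorting color, which is the motivation for leaving RGB space.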

Highlights

  • With the rapid development of remote sensing technology, high-resolution satellite images are being widely used in resource surveys, environmental monitoring, and vegetation management [1,2].

  • The results show that the model with the transfer-learning-based training strategy achieved results 0.054802 dB (PSNR) and 0.004141 (SSIM) worse under the

  • The results indicate that the dark channel prior (DCP) method removes cloud components insufficiently and produces results with large color deviations, which is attributable to the discrepancy in imaging conditions between clouds and haze.


Introduction

With the rapid development of remote sensing technology, high-resolution satellite images are being widely used in resource surveys, environmental monitoring, and vegetation management [1,2]. Nearly 67% of the earth's surface is covered by clouds [3], which greatly reduces the availability of satellite imagery. Thin clouds cover nearly the entirety of satellite images in the form of a semi-translucent "white gauze", which alters spectral information and blurs ground features. Cloud removal is therefore of great significance for improving the availability of high-resolution satellite images. Thick and thin clouds call for different treatment in image applications. Promoted by advanced computer vision methods [4,5,6,7], the accuracy of thick cloud detection is constantly improving, but these methods [8,9,10,11] are not applicable to detecting semi-translucent
