Abstract

Removing cloud cover from satellite images is crucial for many users of optical remote sensing data, because clouds can conceal important spatial information in an image. This underscores the importance of an informed choice among cloud cover detection and removal algorithms. Given large-scale training data, neural networks have been successful in many image processing tasks, but their use for removing cloud occlusion from remote sensing imagery is still relatively new and evolving. The aim of this study is to evaluate the performance of two image restoration algorithms used for cloud cover removal from remote sensing images: the spatial attentive generative adversarial network (SpaGAN) and a convolutional autoencoder with symmetric skip connections. The open-source RICE dataset was used to train both models and to generate their predictions. The evaluation metrics used to compare the models' performance are the structural similarity index measure (SSIM), the peak signal-to-noise ratio (PSNR), and the time each model took to complete network training. After training the networks on 80% of the data and testing on the remaining 20%, SpaGAN achieved the best performance on both PSNR, with a value of 26.3447, and SSIM, with a value of 0.8949, while the convolutional autoencoder achieved a PSNR of 25.8257 and an SSIM of 0.6307. These results show that SpaGAN is more effective than the convolutional autoencoder for automatic cloud cover removal from remote sensing images and yields restored images of higher quality.
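The two metrics reported above can be sketched in plain NumPy. This is a minimal illustration, not the evaluation code used in the study: PSNR follows its standard definition, and the SSIM shown here is a simplified single-window (global) variant, whereas reference implementations such as `skimage.metrics.structural_similarity` compute it over a sliding window and average the result.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((ref - test) ** 2)  # mean squared error between the images
    return 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(x, y, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    # Standard SSIM stabilisation constants (K1 = 0.01, K2 = 0.03)
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A restored image that matches the cloud-free reference exactly gives an SSIM of 1.0 and an unbounded PSNR; larger residual errors lower both scores, which is why SpaGAN's higher values indicate better restoration quality.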
