Abstract

Clouds are one of the major limitations to crop monitoring with optical satellite images. Despite all efforts to provide decision-makers with high-quality agricultural statistics, techniques to optimally process satellite image time series in the presence of clouds are still lacking. In this regard, this article proposes adding a Multi-Layer Perceptron (MLP) loss function to the pix2pix conditional Generative Adversarial Network (cGAN) objective function. The aim is to force the generative model to learn to deliver synthetic pixels whose values act as proxies for the true spectral response, thereby improving subsequent crop type mapping. Furthermore, the generalization capacity of the generative models was evaluated, i.e., their ability to produce plausible pixel values for images not used in training. To assess the performance of the proposed approach, real images were compared with synthetic images generated both with the proposed approach and with the original pix2pix cGAN. The comparative analysis was performed through visual analysis, pixel value analysis, semantic segmentation, and similarity metrics. In general, the proposed approach provided slightly better synthetic pixels than the original pix2pix cGAN, removing more noise and yielding better crop type semantic segmentation: segmentation of the synthetic image generated with the proposed approach achieved an F1-score of 44.2%, against 44.7% for the real image. Regarding generalization, models trained on different regions of the same image provided better pixels than models trained on other images in the time series. In addition, the experiments showed that models trained on a pair of images selected every three months along the time series also provided acceptable results for images without cloud-free areas.

Highlights

  • Investigate whether extending the original pix2pix conditional Generative Adversarial Network (cGAN) objective function with a custom loss function, which minimizes the distance between the semantic segmentations of the real and synthetic images, can deliver synthetic pixels that improve crop type mapping on optical remote sensing images covered by clouds and cloud shadows

  • Evaluate the generalization of the generative models, i.e., whether models trained on a few images selected along the time series can provide suitable synthetic pixels for cloud-covered areas in other images of the same satellite image time series

  • A Multi-Layer Perceptron (MLP) loss function is added to the pix2pix cGAN objective function, minimizing the distance between the semantic segmentations of the real and synthetic images during training, so that the generative models deliver high-quality synthetic pixels for cloud-covered areas in optical images
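The extended generator objective described in the highlights can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the tiny per-pixel MLP, its random weights, the stand-in adversarial term `adv`, and the weights `lambda_l1`/`lambda_mlp` are all hypothetical placeholders for the components trained in the actual pix2pix pipeline.

```python
import numpy as np

# Sketch of the extended generator objective (assumed form):
#   L_G = L_cGAN + lambda_l1 * L1(real, fake) + lambda_mlp * L_MLP(seg(real), seg(fake))

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def mlp_segment(pixels, w1, b1, w2, b2):
    """Tiny per-pixel MLP: spectral bands -> crop class probabilities."""
    h = np.maximum(pixels @ w1 + b1, 0.0)  # ReLU hidden layer
    return softmax(h @ w2 + b2)            # per-pixel class scores

def l1_loss(real, fake):
    """Standard pix2pix L1 reconstruction term."""
    return np.abs(real - fake).mean()

def mlp_loss(p_real, p_fake, eps=1e-8):
    """Cross-entropy between segmentations of real and synthetic pixels."""
    return -(p_real * np.log(p_fake + eps)).sum(axis=-1).mean()

# Toy data: 100 pixels, 4 spectral bands, 3 crop classes.
bands, hidden, classes = 4, 8, 3
real = rng.random((100, bands))
fake = real + 0.05 * rng.standard_normal((100, bands))  # synthetic pixels
w1, b1 = rng.standard_normal((bands, hidden)), np.zeros(hidden)
w2, b2 = rng.standard_normal((hidden, classes)), np.zeros(classes)

adv = 0.7  # stand-in for the cGAN adversarial term from the discriminator
lambda_l1, lambda_mlp = 100.0, 10.0
g_loss = (adv
          + lambda_l1 * l1_loss(real, fake)
          + lambda_mlp * mlp_loss(mlp_segment(real, w1, b1, w2, b2),
                                  mlp_segment(fake, w1, b1, w2, b2)))
print(float(g_loss))
```

Minimizing the MLP term pushes the generator toward synthetic pixels that a downstream classifier segments the same way as the real ones, which is the stated goal of the extension.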


Introduction

To ensure global food security, which is one of the seventeen Sustainable Development Goals defined by the United Nations to be accomplished by 2030, the United Nations [1] states that it is essential to decrease food loss and waste, as well as to increase sustainable agriculture production [2,3]. The remote sensing community has been making efforts to develop methods to extract agricultural statistics from remote sensing images [4,5]. Researchers and groups such as the Joint.

The main goal of a GAN is to improve G until D cannot tell whether generated images are real or synthetic. To this end, the models are trained in an adversarial manner in a zero-sum game, trying to find the optimal mapping function, Equation (2):
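The body of Equation (2) is missing from this excerpt. Assuming the article follows the standard conditional GAN formulation used by pix2pix, the minimax objective would read:

```latex
\min_{G}\max_{D}\;\mathcal{L}_{cGAN}(G,D)
  = \mathbb{E}_{x,y}\!\left[\log D(x,y)\right]
  + \mathbb{E}_{x,z}\!\left[\log\bigl(1 - D\bigl(x, G(x,z)\bigr)\bigr)\right]
```

Here G is the generator, D the discriminator, x the conditioning (cloud-covered) image, y the real target image, and z a noise vector; this reconstruction is a standard form, not a quotation from the article.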

