Abstract

The accurate reconstruction of areas obscured by clouds is among the most challenging problems in the remote sensing community, since a significant percentage of the images archived worldwide are affected by cloud cover, which prevents their full exploitation. The purpose of this paper is to propose new methods for recovering data missing from multispectral images because of clouds, relying on a formulation based on an autoencoder (AE) neural network. We assume that clouds are opaque and that their detection is performed by dedicated algorithms. The AE in our methods models the relationship between a given cloud-free image (source image) and a cloud-contaminated image (target image). In particular, two strategies are developed: the first performs the mapping at the pixel level, while the second operates at the patch level to exploit spatial contextual information. Moreover, to address the problem of choosing the hidden layer size, a new solution combining the minimum description length (MDL) criterion with a Pareto-like selection procedure is introduced. The results of experiments conducted on three different data sets are reported and discussed, together with a comparison against reference techniques.
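To make the pixel-level strategy concrete, the following is a minimal, hypothetical sketch of the idea: a small single-hidden-layer network is trained, on the cloud-free portion of the scene, to map source-image pixels (spectral vectors) to target-image pixels; once trained, it could be applied to pixels under the cloud mask. All sizes, names, and the synthetic data below are illustrative assumptions, not the paper's actual implementation or data.

```python
import numpy as np

# Hypothetical pixel-level mapping network (single hidden layer).
# Sizes, learning rate, and synthetic data are assumptions for
# demonstration only; the paper selects the hidden layer size via
# an MDL criterion combined with a Pareto-like procedure.

rng = np.random.default_rng(0)

n_bands = 4      # spectral bands per pixel (assumed)
n_hidden = 8     # hidden layer size (assumed fixed here)
n_pixels = 2000  # training pixels drawn from the cloud-free area

# Synthetic stand-in data: target pixels are a noisy nonlinear
# transformation of the source pixels.
X = rng.uniform(0.0, 1.0, size=(n_pixels, n_bands))               # source
Y = np.tanh(X @ rng.normal(size=(n_bands, n_bands))) * 0.5 + 0.5  # target

# Network parameters.
W1 = rng.normal(scale=0.1, size=(n_bands, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_bands))
b2 = np.zeros(n_bands)

def forward(X):
    """Map source pixels to predicted target pixels."""
    H = np.tanh(X @ W1 + b1)   # hidden representation
    return H, H @ W2 + b2      # linear output layer

lr = 0.1
for epoch in range(500):
    H, P = forward(X)
    err = P - Y                           # prediction error
    # Backpropagation of the mean-squared-error loss.
    gW2 = H.T @ err / n_pixels
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H ** 2)    # tanh derivative
    gW1 = X.T @ dH / n_pixels
    gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((forward(X)[1] - Y) ** 2))
print(f"final training MSE: {mse:.4f}")
```

The patch-level variant described in the abstract would differ only in the input and output dimensionality: each training sample would be a flattened spatial patch of pixels rather than a single spectral vector, letting the network exploit spatial context.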
