Abstract

In this work we propose new strategies to reconstruct areas obscured by opaque clouds in multispectral images. They are based on an autoencoder (AE) neural network which models the relationships between a given source (cloud-free) image and a target (cloud-contaminated) image. The first strategy estimates the relationship model at the pixel level, while the second operates at the patch level in order to profit from spatial contextual information. Experimental results obtained on FORMOSAT-2 images are reported and discussed, together with a comparison against reference techniques.
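The pixel-level strategy can be illustrated with a minimal sketch: a small one-hidden-layer network is trained on cloud-free pixels to map each source pixel's spectrum to the corresponding target spectrum, then applied to reconstruct the obscured pixels. All data here is synthetic and the network sizes, learning rate, and cloud mask are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two co-registered 4-band images (FORMOSAT-2 has
# four multispectral bands), flattened to (n_pixels, n_bands). The "true"
# source-to-target mapping is an assumed linear mixing plus noise.
n_pixels, n_bands, n_hidden = 2000, 4, 8
source = rng.random((n_pixels, n_bands))               # cloud-free acquisition
true_map = rng.random((n_bands, n_bands))
true_map /= true_map.sum(axis=0)                       # keep target in [0, 1]
target = source @ true_map + 0.01 * rng.standard_normal((n_pixels, n_bands))

cloud_mask = np.zeros(n_pixels, dtype=bool)
cloud_mask[:300] = True                                # pixels hidden by clouds
clear = ~cloud_mask

# One-hidden-layer network trained by plain gradient descent on the clear
# pixels: encode the source spectrum, decode a prediction of the target one.
W1 = rng.standard_normal((n_bands, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_bands)) * 0.1
b2 = np.zeros(n_bands)
lr = 0.5

X, Y = source[clear], target[clear]
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # hidden (encoding) layer
    P = H @ W2 + b2                    # predicted target spectra
    E = P - Y                          # residual for the squared-error loss
    gW2 = H.T @ E / len(X); gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# Reconstruct the cloud-contaminated target pixels from the source image.
H = np.tanh(source[cloud_mask] @ W1 + b1)
recon = H @ W2 + b2
rmse = np.sqrt(np.mean((recon - target[cloud_mask]) ** 2))
print(f"RMSE on obscured pixels: {rmse:.4f}")
```

The patch-level variant would feed a window of neighbouring pixels (a flattened patch) to the same kind of network instead of a single spectrum, so spatial context enters the mapping.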
