Abstract
The synthesis of realistic data using deep learning techniques has greatly improved the performance of classifiers in handling incomplete data. Remote sensing applications that have benefited from those techniques include translating images between different sensors, improving image resolution, and completing missing temporal or spatial data, such as in cloudy optical images. In this context, this letter proposes a new deep-learning-based framework to synthesize missing or corrupted multispectral optical images using multimodal/multitemporal data. Specifically, we use conditional generative adversarial networks (cGANs) to generate the missing optical image by exploiting the corresponding synthetic aperture radar (SAR) data together with a SAR-optical pair acquired over the same area at a different date. The proposed framework was evaluated in two land-cover applications over tropical regions, where cloud coverage is a major problem: crop recognition and wildfire detection. In both applications, our proposal was superior to the alternative approaches tested in our experiments. In particular, our approach outperformed recent cGAN-based proposals for cloud removal, on average, by 7.7% and 8.6% in terms of overall accuracy and F1-score, respectively.
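The multimodal conditioning described above (SAR acquired at the target date, plus a SAR-optical pair from the same area at an earlier date) is commonly realized by channel-wise concatenation of the inputs fed to the cGAN generator. The sketch below illustrates only that input construction; the band counts and array shapes are illustrative assumptions, not values taken from the letter:

```python
import numpy as np

# Hypothetical shapes: a 13-band multispectral optical image and a
# 2-band SAR image (e.g., two polarizations), 64x64 pixels each.
H, W = 64, 64
sar_t1 = np.random.rand(2, H, W).astype(np.float32)       # SAR at the target date t1
sar_t0 = np.random.rand(2, H, W).astype(np.float32)       # SAR paired with the earlier optical image
optical_t0 = np.random.rand(13, H, W).astype(np.float32)  # cloud-free optical image at date t0

# The generator is conditioned on all available modalities: the channel-wise
# concatenation forms its input, from which it must synthesize the missing
# optical image at t1.
cond_input = np.concatenate([sar_t1, sar_t0, optical_t0], axis=0)
print(cond_input.shape)  # (17, 64, 64): 2 + 2 + 13 conditioning channels
```

In an adversarial setup, the discriminator would likewise receive this conditioning stack alongside the real or synthesized optical image, so that realism is judged relative to the available SAR/temporal evidence.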