Abstract

We present a cloudy scene synthesis paradigm that produces cloudy images for arbitrary optical cloud-free observations. The synthesis paradigm consists of two fundamental operations: 1) cloud self-subtraction and 2) cloud addition-to-scene. Cloud self-subtraction extracts cloud ingredient images from cloudy images over weak-texture regions (typically sea areas). The cloud ingredient images exhibit clouds in more realistic forms than simulated clouds. Cloud addition-to-scene incorporates the cloud ingredient images into arbitrary cloud-free land images, synthesizing cloudy scenes. It provides a means of constructing pairs of cloud-free and cloudy scene images, which are highly needed but considerably scarce in the remote sensing literature. We refer to the overall paradigm comprising the two fundamental operations as cloudy image arithmetic. We explore the use of cloudy image arithmetic for the purpose of thin cloud removal. To this end, we develop a multiscale generative adversarial network (MSGAN) that removes thin clouds from cloudy scenes, and use cloudy image arithmetic to construct a comprehensive training dataset for it. Experimental evaluations validate that cloudy image arithmetic synthesizes realistic cloudy scenes and that the MSGAN, with the aid of cloudy image arithmetic, gives effective results in thin cloud removal.
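The two operations of the cloudy image arithmetic can be sketched in code. The following is a minimal illustrative sketch, not the paper's actual method: it assumes an additive thin-cloud imaging model and a crude per-band-minimum estimate of the clear background; the function names and the `alpha` scaling parameter are hypothetical.

```python
import numpy as np

def cloud_self_subtraction(cloudy_weak_texture, background_level=None):
    """Extract a cloud ingredient image from a cloudy image over a
    weak-texture region (e.g. sea), assuming an additive cloud model.

    Illustrative sketch: the scene signal in a weak-texture region is
    roughly constant, so subtracting an estimate of the clear background
    leaves the cloud component.
    """
    if background_level is None:
        # Crude clear-background estimate (an assumption, not the paper's
        # method): per-band minimum, treating the darkest pixels as
        # nearly cloud-free.
        background_level = cloudy_weak_texture.min(axis=(0, 1), keepdims=True)
    return np.clip(cloudy_weak_texture - background_level, 0.0, 1.0)

def cloud_addition_to_scene(cloud_free_scene, cloud, alpha=1.0):
    """Synthesize a cloudy scene by adding a (scaled) cloud ingredient
    image to an arbitrary cloud-free land image."""
    return np.clip(cloud_free_scene + alpha * cloud, 0.0, 1.0)
```

Pairing each `cloud_free_scene` with its synthesized output of `cloud_addition_to_scene` yields the cloud-free/cloudy training pairs described above.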
