Abstract

In satellite optical images, clouds typically appear at different scales and with widely varying boundaries. To accurately capture these variable visual forms, we present a deep-learning strategy, Boundary Nets, which generates a cloud mask for a given cloudy image. The Boundary Nets consist of two nets: (a) a scalable-boundary net and (b) a differentiable-boundary net. The scalable-boundary net extracts multi-scale features from a cloudy image and comprehensively characterizes clouds with variable boundary scales through a multi-scale fusion module. Together, the multi-scale feature extraction and multi-scale fusion consistently capture clouds of different sizes, producing a multi-scale cloud mask for the cloudy image. The differentiable-boundary net characterizes the difference between the multi-scale cloud mask and the ground-truth cloud mask through a residual architecture, generating a difference cloud mask that complements the boundary details of the multi-scale cloud mask. Finally, the overall cloud mask is obtained by fusing the multi-scale cloud mask and the difference cloud mask. During training, multiple key outputs of the Boundary Nets receive supervision in a distributed manner, and their losses are summed into a single training objective. Such distributed, overall supervision not only avoids training the two nets separately but also tightly couples them into a unified framework. Experimental results demonstrate that Boundary Nets achieve outstanding cloud-detection performance. The code for implementing the proposed Boundary Nets is available at https://gitee.com/kang_wu/boundary-nets.
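The mask-fusion and distributed-supervision scheme described above can be sketched as follows. This is a minimal illustration in plain Python, not the authors' implementation (which is at the Gitee link): the masks, the clipping-based fusion, and the choice of per-branch losses here are all assumptions made for the sake of the example.

```python
import math

def bce(pred, target):
    # Pixel-wise binary cross-entropy over flattened mask probabilities.
    eps = 1e-7
    return sum(-(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps)))
               for p, t in zip(pred, target)) / len(pred)

# Hypothetical predictions on a flattened 4-pixel patch (illustrative values,
# not taken from the paper).
gt        = [1.0, 1.0, 0.0, 0.0]        # ground-truth cloud mask
ms_mask   = [0.9, 0.6, 0.1, 0.2]        # multi-scale mask (scalable-boundary net)
diff_mask = [0.05, 0.3, -0.05, -0.1]    # residual complement (differentiable-boundary net)

# Fusion: the difference mask refines the boundary details of the
# multi-scale mask; clipping keeps the result a valid probability mask.
overall_mask = [min(max(m + d, 0.0), 1.0) for m, d in zip(ms_mask, diff_mask)]

# Distributed supervision: each key output is supervised against the ground
# truth, and the losses are summed into one overall training objective.
total_loss = bce(ms_mask, gt) + bce(overall_mask, gt)
```

In this toy example the fused mask scores a lower loss than the multi-scale mask alone, mirroring the intended role of the difference mask as a boundary-detail complement; summing the per-branch losses trains both nets jointly rather than separately.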
