Abstract

Cloud and cloud shadow segmentation of satellite imagery is a prerequisite for many remote sensing applications. Because of the limited number of available spectral bands and the complexity of background information, traditional detection methods suffer from false detections, missed detections, and inaccurate boundary information in the segmentation results. To address these problems, a global attention fusion residual network is proposed to segment cloud and cloud shadow in satellite imagery. The proposed model adopts a Residual Network (ResNet) as its backbone to extract semantic information at different feature levels. To improve the network's handling of boundary information, an improved atrous spatial pyramid pooling module is introduced to extract multi-scale deep semantic information. The deep semantic information is then fused with shallow spatial information at different scales through a Global Attention Upsample mechanism, which improves the network's ability to exploit both global and local features. Finally, a boundary refinement module predicts the boundaries of cloud and shadow, so the boundary information is refined. Experimental results on Sentinel-2 and Land Remote-Sensing Satellite (Landsat) imagery show that the proposed method is superior to existing methods in both segmentation accuracy and speed, which is of great significance for practical cloud and shadow segmentation.
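
The abstract outlines a pipeline of ResNet backbone, improved atrous spatial pyramid pooling, global attention upsampling, and boundary refinement. The following is a minimal sketch of that pipeline, assuming PyTorch and a torchvision ResNet-50 backbone; the class names, channel widths, dilation rates, and the residual form of the boundary refinement head are illustrative assumptions, not the paper's actual implementation details.

```python
# Hypothetical sketch of the described architecture (assumes torchvision >= 0.13).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class ASPP(nn.Module):
    """Atrous spatial pyramid pooling over the deepest backbone features."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class GlobalAttentionUpsample(nn.Module):
    """Fuse deep semantic features with shallow spatial features:
    globally pooled deep features gate the shallow features."""
    def __init__(self, low_ch, high_ch, out_ch):
        super().__init__()
        self.low_conv = nn.Conv2d(low_ch, out_ch, 3, padding=1, bias=False)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_ch, out_ch, 1),
            nn.Sigmoid(),
        )
        self.high_conv = nn.Conv2d(high_ch, out_ch, 1)

    def forward(self, low, high):
        low = self.low_conv(low) * self.gate(high)   # attention-weighted shallow features
        high = F.interpolate(self.high_conv(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return low + high


class CloudShadowNet(nn.Module):
    def __init__(self, num_classes=3):               # e.g. background / cloud / shadow
        super().__init__()
        r = resnet50(weights=None)
        self.stem = nn.Sequential(r.conv1, r.bn1, r.relu, r.maxpool)
        self.layer1, self.layer2 = r.layer1, r.layer2  # shallow spatial features
        self.layer3, self.layer4 = r.layer3, r.layer4  # deep semantic features
        self.aspp = ASPP(2048, 256)
        self.gau3 = GlobalAttentionUpsample(1024, 256, 256)
        self.gau2 = GlobalAttentionUpsample(512, 256, 256)
        self.gau1 = GlobalAttentionUpsample(256, 256, 256)
        self.classifier = nn.Conv2d(256, num_classes, 1)
        # Boundary refinement: a small residual block sharpening the logits.
        self.refine = nn.Sequential(
            nn.Conv2d(num_classes, num_classes, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(num_classes, num_classes, 3, padding=1),
        )

    def forward(self, x):
        size = x.shape[2:]
        c1 = self.layer1(self.stem(x))
        c2 = self.layer2(c1)
        c3 = self.layer3(c2)
        c4 = self.aspp(self.layer4(c3))
        f = self.gau3(c3, c4)                         # fuse deep context into mid-level features
        f = self.gau2(c2, f)
        f = self.gau1(c1, f)
        logits = self.classifier(f)
        logits = logits + self.refine(logits)         # residual boundary refinement
        return F.interpolate(logits, size=size, mode="bilinear", align_corners=False)


if __name__ == "__main__":
    net = CloudShadowNet()
    out = net(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 3, 256, 256])
```

The sketch only illustrates the data flow the abstract describes: deep features pass through the ASPP-style module, then gate and merge with progressively shallower features before a boundary refinement head produces the final segmentation map.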
