Abstract

Cloud and cloud shadow detection in remote sensing imagery is important for a wide range of applications. Traditionally, detection relies on manually designed thresholds over multiple spectral bands, a process that is complicated and involves multiple stages. To simplify cloud and cloud shadow detection and improve performance, we propose a multilevel feature fused segmentation network (MFFSNet), which can be trained end-to-end without any hand-tuned parameters. Specifically, a fully convolutional network is proposed to learn cloud and cloud shadow features. Then, a novel pyramid pooling module extracts the contextual relation between clouds and their shadows. Furthermore, a dedicated multilevel feature fusion structure combines semantic information with spatial information from different levels, so that multiscale objects are handled better and detailed segmentation boundaries are produced. Experiments show that MFFSNet outperforms state-of-the-art methods, achieving accuracies of 98.69% for cloud detection and 98.92% for cloud shadow detection.
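To make the two architectural ideas in the abstract concrete, the sketch below shows a PSPNet-style pyramid pooling module and a simple multilevel feature fusion block. This is not the authors' released implementation; the class names (PyramidPooling, MultilevelFusion), the pool sizes (1, 2, 3, 6), and all channel counts are assumptions for illustration only.

```python
# Minimal sketch (assumed layer sizes, not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPooling(nn.Module):
    """Pool the feature map at several scales, project, upsample, and concatenate,
    so each position sees regional-to-global context."""

    def __init__(self, in_channels, pool_sizes=(1, 2, 3, 6)):
        super().__init__()
        branch_channels = in_channels // len(pool_sizes)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(size),                        # context at one scale
                nn.Conv2d(in_channels, branch_channels, 1, bias=False),
                nn.BatchNorm2d(branch_channels),
                nn.ReLU(inplace=True),
            )
            for size in pool_sizes
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(branch(x), size=(h, w), mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        return torch.cat([x] + pooled, dim=1)                      # fuse context with input


class MultilevelFusion(nn.Module):
    """Fuse a deep (semantic) feature map with a shallow (spatially detailed) one."""

    def __init__(self, deep_channels, shallow_channels, out_channels):
        super().__init__()
        self.project = nn.Conv2d(deep_channels + shallow_channels, out_channels,
                                 kernel_size=3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)

    def forward(self, deep, shallow):
        deep = F.interpolate(deep, size=shallow.shape[2:], mode="bilinear",
                             align_corners=False)                  # match shallow resolution
        return F.relu(self.bn(self.project(torch.cat([deep, shallow], dim=1))))
```

Under these assumptions, the fusion block upsamples the semantic features to the resolution of the shallow features before merging them, which is what allows small clouds and thin shadow boundaries to be recovered in the final segmentation map.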
