Abstract

Cloud detection in remote sensing images is a challenging but significant task. Because of the variety and complexity of underlying surfaces, most current cloud detection methods have difficulty detecting thin cloud regions. In fact, distinguishing thin clouds from thick clouds is quite meaningful, especially for cloud removal and target detection tasks. Therefore, we propose a method based on a multiscale-feature convolutional neural network (MF-CNN) to detect thin cloud, thick cloud, and noncloud pixels of remote sensing images simultaneously. Landsat 8 satellite imagery with various levels of cloud coverage is used to demonstrate the effectiveness of the proposed MF-CNN model. We first stack the visible, near-infrared, short-wave infrared, cirrus, and thermal infrared bands of Landsat 8 imagery to obtain the combined spectral information. The MF-CNN model is then used to learn multiscale global features of the input images. The high-level semantic information obtained during feature learning is integrated with low-level spatial information to classify the imagery into thick cloud, thin cloud, and noncloud regions. The performance of the proposed model is compared with that of several commonly used cloud detection methods, both qualitatively and quantitatively. The experimental results show that, compared to the other cloud detection methods, the proposed method performs better not only on thick and thin cloud regions separately but also on the cloud region as a whole.
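The abstract describes a pipeline of stacking Landsat 8 bands, extracting multiscale features, fusing high-level semantic maps with low-level spatial maps, and producing a per-pixel three-class prediction. The sketch below illustrates that general idea only; it is not the authors' MF-CNN architecture. The band count (9), layer widths, kernel sizes, and the bilinear-upsampling fusion are illustrative assumptions, and a PyTorch-style implementation is assumed.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiscaleCloudNet(nn.Module):
    """Toy multiscale CNN: fuses low-level spatial and high-level semantic
    features, then classifies each pixel as noncloud / thin cloud / thick cloud.
    Hypothetical layer configuration, not the MF-CNN from the paper."""

    def __init__(self, in_bands=9, num_classes=3):
        super().__init__()
        # Shallow block at full resolution keeps fine spatial detail (low-level features).
        self.low = nn.Sequential(
            nn.Conv2d(in_bands, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Deeper, downsampled blocks capture larger context (high-level semantic features).
        self.mid = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.high = nn.Sequential(
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
        )
        # 1x1 convolution over the fused feature maps yields per-pixel class scores.
        self.classifier = nn.Conv2d(32 + 64 + 128, num_classes, kernel_size=1)

    def forward(self, x):
        low = self.low(x)        # full resolution
        mid = self.mid(low)      # 1/2 resolution
        high = self.high(mid)    # 1/4 resolution
        size = low.shape[-2:]
        # Upsample the coarser maps and concatenate them with the fine map (feature fusion).
        fused = torch.cat([
            low,
            F.interpolate(mid, size=size, mode="bilinear", align_corners=False),
            F.interpolate(high, size=size, mode="bilinear", align_corners=False),
        ], dim=1)
        return self.classifier(fused)   # (N, num_classes, H, W) logits


if __name__ == "__main__":
    # Example input: a tile of stacked Landsat 8 bands
    # (visible, near-infrared, short-wave infrared, cirrus, thermal infrared);
    # the 9-band count here is an assumption for illustration.
    bands = torch.randn(1, 9, 256, 256)
    logits = MultiscaleCloudNet()(bands)
    labels = logits.argmax(dim=1)       # 0 = noncloud, 1 = thin cloud, 2 = thick cloud
    print(labels.shape)                 # torch.Size([1, 256, 256])
```

The key design point the abstract emphasizes is the fusion step: coarse, semantically rich maps are brought back to full resolution and combined with fine spatial maps before the per-pixel classification, which is what allows thin and thick clouds to be separated at pixel level.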
