Abstract

Clouds are a major obstacle to the application of optical remote-sensing images: they break the continuity of ground information in the images and reduce their utilization rate. Cloud detection has therefore become an important preprocessing step for optical remote-sensing applications. Because the cloud features used by current cloud-detection methods are mostly determined by manual interpretation, while the information in remote-sensing images is complex, the accuracy and generalization of these methods remain unsatisfactory. Since cloud detection aims to extract cloud regions from the background, it can be regarded as a semantic segmentation problem. This paper introduces a cloud-detection method based on deep convolutional neural networks (DCNN), the spatial folding–unfolding remote-sensing network (SFRS-Net), analyzes why DCNNs lose accuracy during cloud-region segmentation, and presents the concept of space folding/unfolding. The backbone of the proposed method adopts an encoder–decoder structure in which the pooling operations of the encoder are replaced by folding operations and the upsampling operations of the decoder are replaced by unfolding operations. As a result, the accuracy of cloud detection is improved while generalization is preserved. In the experiment, multispectral data from the GaoFen-1 (GF-1) satellite are collected to form a dataset, on which the overall accuracy (OA) of the method reaches 96.98%, a satisfactory result. This study aims to develop a method that is suitable for cloud detection and can complement other cloud-detection methods, providing a reference for researchers interested in cloud detection in remote-sensing images.
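The folding and unfolding operations are described here only at a high level. As a rough illustration, the sketch below treats folding as a lossless space-to-depth rearrangement that replaces pooling, and unfolding as its exact inverse that replaces upsampling; this reading, along with the function names, block size, and use of NumPy, is an illustrative assumption and not the authors' published implementation.

import numpy as np

def fold(x, block=2):
    """Space folding: rearrange each `block` x `block` spatial patch into channels.

    x has shape (H, W, C) with H and W divisible by `block`; the result has
    shape (H // block, W // block, C * block * block). Unlike max or average
    pooling, the rearrangement discards no pixel information.
    """
    h, w, c = x.shape
    x = x.reshape(h // block, block, w // block, block, c)
    x = x.transpose(0, 2, 1, 3, 4)          # gather the within-block offsets last
    return x.reshape(h // block, w // block, c * block * block)

def unfold(x, block=2):
    """Space unfolding: the exact inverse of `fold`, moving channels back into space."""
    h, w, c = x.shape
    c_out = c // (block * block)
    x = x.reshape(h, w, block, block, c_out)
    x = x.transpose(0, 2, 1, 3, 4)          # interleave block offsets back into rows/columns
    return x.reshape(h * block, w * block, c_out)

# Round trip: unfolding a folded feature map recovers the original exactly.
feat = np.random.rand(8, 8, 4)
assert np.allclose(unfold(fold(feat)), feat)

Under this interpretation, the encoder reduces spatial resolution and the decoder restores it without ever discarding pixel information, which is consistent with the accuracy argument made in the abstract.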

Highlights

  • With the rapid development of satellite remote-sensing technology, satellite remote-sensing images are playing an increasingly important role in production and daily life in today’s society

  • In the encoding part, the traditional pooling layers are replaced by folding layers; in the decoding part, the traditional upsampling layers are replaced by unfolding layers

  • Compared with other neural network structures, the convolutional neural network (CNN) architecture is better suited to processing images, because its input and hidden layers consist of three-dimensional neuron layers that are well adapted to multichannel image data

Introduction

With the rapid development of satellite remote-sensing technology, satellite remote-sensing images are playing an increasingly important role in production and daily life in today’s society. Fields such as industry, agriculture, and the service sector cannot develop well without the support of satellite remote-sensing data [1,2,3,4,5]. Because clouds obscure ground information, remote-sensing images contain a large amount of invalid data, which occupies excessive storage space and transmission bandwidth. Accurate cloud masks can be created by manual interpretation, but the massive volume of remote-sensing data makes such a time-consuming manual operation impractical.
