Abstract

High-precision cloud detection is a key step in the processing of remote sensing imagery. However, existing cloud detection methods struggle to extract cloud pixels accurately, especially in images of thin and fragmented clouds or clouds over high-brightness surfaces. In this study, we developed a new model by combining the existing Fully Convolutional Network (FCN-8s) and U-network (U-net) models (based on the three visible bands) to take full advantage of spectral and spatial information. In the proposed Fully Convolutional Network Ensembling Learning (FCNEL) model, U-net and FCN-8s first perform separate classifications according to their relative strengths, and their outputs are then fused by a voting strategy that integrates multi-scale features from both models. The model was tested on Landsat 8 Operational Land Imager (OLI) data covering different surface and cloud types, achieving an average overall accuracy of 91.68% and an average producer's accuracy of 98.52%. The proposed FCNEL model was thus superior to FCN-8s and U-net alone, as well as to the widely used Function of mask (Fmask) algorithm, and shows good adaptability to various cloud types and diverse underlying surface environments.
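The abstract states only that the two network outputs are "fused by the voting strategy", without specifying the exact fusion rule. The sketch below is a minimal, hypothetical illustration of one plausible soft-voting fusion: averaging the per-pixel cloud probabilities produced by U-net and FCN-8s and thresholding the result. The function name `fuse_by_voting`, the probability inputs, and the threshold value are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def fuse_by_voting(prob_unet, prob_fcn8s, threshold=0.5):
    """Fuse per-pixel cloud probabilities from two networks by soft voting.

    prob_unet, prob_fcn8s : 2-D arrays of cloud probabilities in [0, 1],
        one value per pixel, produced by U-net and FCN-8s respectively.
    Returns a binary cloud mask (1 = cloud, 0 = clear).
    """
    # Soft voting: average the two probability maps, then threshold.
    fused = (prob_unet + prob_fcn8s) / 2.0
    return (fused >= threshold).astype(np.uint8)


if __name__ == "__main__":
    # Dummy probability maps stand in for real network outputs.
    rng = np.random.default_rng(0)
    p_unet = rng.random((256, 256))
    p_fcn8s = rng.random((256, 256))
    cloud_mask = fuse_by_voting(p_unet, p_fcn8s)
    print("Cloud fraction:", cloud_mask.mean())
```

A hard-voting variant (e.g., marking a pixel as cloud only when both binary masks agree) would be an equally plausible reading of the abstract; the choice trades off omission errors against commission errors over bright surfaces.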
