Abstract

In remote sensing image analysis, cloud interference in high-resolution imagery has long been a challenging problem, and traditional methods often face limitations in addressing it. To this end, this study proposes an innovative solution that integrates radiative feature analysis with deep learning to develop a refined cloud segmentation method. The core innovation is FFASPPDANet (Feature Fusion Atrous Spatial Pyramid Pooling Dual Attention Network), a feature fusion dual attention network augmented with atrous spatial pyramid pooling to improve the model's ability to recognize cloud features. In addition, we introduce a probabilistic thresholding method based on pixel radiation spectrum fusion, further improving the accuracy and reliability of cloud segmentation and yielding the "FFASPPDANet+" algorithm. Experimental validation shows that FFASPPDANet+ performs well across complex scenarios, achieving accuracies of 99.27% over water bodies, 96.79% in complex urban settings, and 95.82% on a random test set. This research not only improves the efficiency and accuracy of cloud segmentation in high-resolution remote sensing images but also provides a new direction and application example for integrating deep learning with radiative algorithms.
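The abstract does not include implementation details, but the named components (an atrous spatial pyramid pooling block and a DANet-style dual attention module, fused before per-pixel classification) can be sketched as follows. This is a minimal PyTorch sketch under those assumptions, not the authors' implementation; all module names, channel sizes, dilation rates, and the final probability threshold are illustrative.

```python
# Hypothetical sketch of an ASPP + dual-attention fusion head for cloud segmentation.
# Not the FFASPPDANet+ code; shapes and hyperparameters are assumptions.
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous Spatial Pyramid Pooling: parallel dilated 3x3 convolutions."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

class PositionAttention(nn.Module):
    """Spatial self-attention over H*W positions (DANet-style)."""
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)              # B x HW x C'
        k = self.k(x).flatten(2)                              # B x C' x HW
        attn = torch.softmax(q @ k, dim=-1)                   # B x HW x HW
        v = self.v(x).flatten(2)                              # B x C x HW
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Channel self-attention (DANet-style)."""
    def __init__(self, ch):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        f = x.flatten(2)                                      # B x C x HW
        attn = torch.softmax(f @ f.transpose(1, 2), dim=-1)   # B x C x C
        out = (attn @ f).view(b, c, h, w)
        return self.gamma * out + x

class FusionHead(nn.Module):
    """Fuse ASPP context with dual-attention features; predict a cloud mask."""
    def __init__(self, in_ch=256, mid_ch=128):
        super().__init__()
        self.aspp = ASPP(in_ch, mid_ch)
        self.pam = PositionAttention(mid_ch)
        self.cam = ChannelAttention(mid_ch)
        self.classify = nn.Conv2d(mid_ch, 1, 1)

    def forward(self, feat):
        ctx = self.aspp(feat)
        fused = self.pam(ctx) + self.cam(ctx)                 # element-wise feature fusion
        return torch.sigmoid(self.classify(fused))            # per-pixel cloud probability

# Usage: a 256-channel feature map from a hypothetical backbone.
if __name__ == "__main__":
    head = FusionHead(in_ch=256)
    prob = head(torch.randn(1, 256, 64, 64))                  # 1 x 1 x 64 x 64 probabilities
    mask = (prob > 0.5).float()                               # plain threshold stand-in for the
                                                              # paper's radiation-spectrum fusion
```

The final thresholding line is a plain probability cut-off; the paper's probabilistic thresholding based on pixel radiation spectrum fusion would replace it with a decision informed by the per-pixel radiative features.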
