Abstract

Cloud detection is one of the key technologies in the field of remote sensing. Although existing deep learning-based cloud detection methods achieve good overall performance, their results in confusing areas such as cloud boundaries and thin clouds are often unsatisfactory due to inter-class similarity and intra-class inconsistency among objects. To this end, we propose a Boundary-Aware Bilateral Fusion Network (BABFNet), which enhances cloud detection in these confusing areas by introducing an auxiliary boundary prediction branch. To avoid losing detail, the boundary prediction branch runs at full resolution with a shallow architecture, while Semantic Enhancement Modules (SEMs) supplement it with high-level semantics by injecting multi-level encoder features from the cloud detection branch. This feature sharing in turn drives the cloud detection branch to focus more on cloud boundaries during training. At the end of the network, a Bilateral Fusion Module (BFM) allows the features of the two branches to complement each other: the cloud detection branch provides multi-scale features that sharpen boundary prediction, while the boundary features serve as prior knowledge that helps the cloud detection branch aggregate contextual information. To verify the effectiveness of the proposed method, we select four different networks as cloud detection branches and conduct comparative experiments on two public datasets, GF-1 WFV and MODIS. The results show that the proposed method significantly improves cloud detection in confusing areas.
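
Since the abstract gives only the high-level layout, the following is a minimal PyTorch sketch of the two-branch design it describes. The toy three-stage encoder standing in for the cloud detection branch, the channel widths, and the internals of `SEM` and `BFM` (concatenation-based fusion and a sigmoid boundary prior that re-weights segmentation features) are all hypothetical assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_bn_relu(in_ch, out_ch, stride=1):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SEM(nn.Module):
    """Hypothetical Semantic Enhancement Module: projects one encoder
    feature from the cloud detection branch, upsamples it to full
    resolution, and fuses it into the boundary stream."""

    def __init__(self, boundary_ch, encoder_ch):
        super().__init__()
        self.proj = nn.Conv2d(encoder_ch, boundary_ch, 1)
        self.fuse = conv_bn_relu(2 * boundary_ch, boundary_ch)

    def forward(self, b, e):
        e = F.interpolate(self.proj(e), size=b.shape[-2:],
                          mode="bilinear", align_corners=False)
        return self.fuse(torch.cat([b, e], dim=1))


class BFM(nn.Module):
    """Hypothetical Bilateral Fusion Module: segmentation features add
    context to the boundary stream, while a sigmoid boundary map
    re-weights the segmentation features as a spatial prior."""

    def __init__(self, ch):
        super().__init__()
        self.boundary_gate = nn.Conv2d(ch, 1, 1)
        self.refine_seg = conv_bn_relu(ch, ch)
        self.refine_bnd = conv_bn_relu(ch, ch)

    def forward(self, seg, bnd):
        bnd = self.refine_bnd(bnd + seg)               # context -> boundary
        prior = torch.sigmoid(self.boundary_gate(bnd)) # boundary prior in [0, 1]
        seg = self.refine_seg(seg * (1 + prior))       # emphasize boundary pixels
        return seg, bnd


class BABFNetSketch(nn.Module):
    def __init__(self, in_ch=4, num_classes=2, width=32):
        super().__init__()
        # Cloud detection branch: a toy 3-stage encoder standing in for
        # whichever segmentation backbone is paired with BABFNet.
        self.stage1 = conv_bn_relu(in_ch, 64, stride=2)
        self.stage2 = conv_bn_relu(64, 128, stride=2)
        self.stage3 = conv_bn_relu(128, 256, stride=2)
        self.seg_proj = nn.Conv2d(256, width, 1)
        # Boundary prediction branch: shallow and full-resolution,
        # with SEMs injecting multi-level encoder semantics.
        self.stem = conv_bn_relu(in_ch, width)
        self.sems = nn.ModuleList([SEM(width, c) for c in (64, 128, 256)])
        self.bfm = BFM(width)
        self.seg_head = nn.Conv2d(width, num_classes, 1)
        self.bnd_head = nn.Conv2d(width, 1, 1)

    def forward(self, x):
        full = x.shape[-2:]
        e1 = self.stage1(x)
        e2 = self.stage2(e1)
        e3 = self.stage3(e2)
        b = self.stem(x)  # boundary stream stays at full resolution
        for sem, e in zip(self.sems, (e1, e2, e3)):
            b = sem(b, e)
        seg = F.interpolate(self.seg_proj(e3), size=full,
                            mode="bilinear", align_corners=False)
        seg, b = self.bfm(seg, b)
        return self.seg_head(seg), torch.sigmoid(self.bnd_head(b))


if __name__ == "__main__":
    net = BABFNetSketch()
    cloud_logits, boundary_map = net(torch.randn(1, 4, 128, 128))
    print(cloud_logits.shape, boundary_map.shape)
    # torch.Size([1, 2, 128, 128]) torch.Size([1, 1, 128, 128])
```

In this sketch the boundary stream never downsamples, mirroring the abstract's full-resolution shallow design, while each SEM pulls semantics from a progressively deeper encoder stage; the four input channels reflect the 4-band GF-1 WFV imagery mentioned in the experiments.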
