Abstract

Flood events are often accompanied by rainy weather, which limits the applicability of optical satellite images, whereas synthetic aperture radar (SAR) is largely insensitive to weather and sunlight conditions. Although remarkable progress has been made in flood detection using heterogeneous multispectral and SAR images, publicly available large-scale datasets are lacking, and further effort is required to exploit deep neural networks for heterogeneous flood detection. This study constructed a heterogeneous flood mapping dataset named CAU-Flood, comprising pre-disaster Sentinel-2 and post-disaster Sentinel-1 images of 18 study plots with careful image preprocessing and human annotation. A new deep convolutional neural network (CNN), named cross-modal change detection network (CMCDNet), was also proposed for flood detection using multispectral and SAR images. The proposed network employs an encoder-decoder structure and performs feature fusion at multiple stages using gating and self-attention modules. Furthermore, the network overcomes the feature misalignment issue during decoding by embedding a feature alignment module in the upsampling operation. The proposed CMCDNet outperformed state-of-the-art methods in terms of flood detection accuracy, achieving an intersection over union (IoU) of 89.84%. The codes and datasets are available at: https://github.com/CAU-HE/CMCDNet.
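To make the multi-stage gated fusion idea concrete, the sketch below shows one simplified form of gated cross-modal fusion: a per-pixel sigmoid gate, computed from the concatenated optical and SAR features, weights the contribution of each modality. This is a minimal NumPy illustration under assumed shapes and a hypothetical linear gate; the actual CMCDNet gating and self-attention modules are defined in the paper and repository, not here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(opt_feat, sar_feat, w, b):
    """Fuse optical and SAR feature maps with a per-pixel sigmoid gate.

    opt_feat, sar_feat: (C, H, W) feature maps from the two encoder branches.
    w: (C, 2C) gate weights, b: (C,) gate bias -- hypothetical parameters
    standing in for a learned 1x1 convolution.
    The gate g in (0, 1) mixes the modalities: g*opt + (1-g)*sar.
    """
    stacked = np.concatenate([opt_feat, sar_feat], axis=0)  # (2C, H, W)
    gate = sigmoid(np.einsum('ck,khw->chw', w, stacked) + b[:, None, None])
    return gate * opt_feat + (1.0 - gate) * sar_feat  # (C, H, W)

# Toy usage with random features
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
opt = rng.standard_normal((C, H, W))
sar = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1
b = np.zeros(C)
fused = gated_fusion(opt, sar, w, b)
print(fused.shape)  # (4, 8, 8)
```

Because the gate is a convex combination, each fused value lies between the corresponding optical and SAR feature values, which is the intended behavior of this kind of modality-weighting module.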
