Abstract

Urban flood mapping is essential for disaster rescue and relief missions, reconstruction efforts, and financial loss evaluation. Much progress has been made in mapping the extent of flooding with multi-source remote sensing imagery and pattern recognition algorithms. However, urban flood mapping at high spatial resolution remains a major challenge for three main reasons: (1) very high resolution (VHR) optical remote sensing imagery often has a heterogeneous background involving various ground objects (e.g., vehicles, buildings, roads, and trees), causing traditional classification algorithms to fail to capture the underlying spatial correlation between neighboring pixels within the flood hazard area; (2) traditional flood mapping methods with handcrafted features as input cannot fully leverage the massive amounts of available data, which require robust and scalable algorithms; and (3) due to inconsistent weather conditions at different times of data acquisition, pixels of the same objects in VHR optical imagery can have very different values, leading to poor generalization capability in classical flood mapping methods. To address these challenges, this paper proposes a residual patch similarity convolutional neural network (ResPSNet) to map urban flood hazard zones using bi-temporal high-resolution (3 m) pre- and post-flooding multispectral surface reflectance satellite imagery. In addition, a remote-sensing-specific data augmentation method was developed to remove the impact of varying illumination caused by differing data acquisition conditions, which in turn further improves the performance of the proposed model. Experiments using high-resolution imagery acquired before and after the 2017 Hurricane Harvey flood in Houston, Texas, showed that the developed ResPSNet model, together with the associated remote-sensing-specific data augmentation method, can robustly produce flood maps over urban areas with high precision (0.9002), recall (0.9302), F1 score (0.9128), and overall accuracy (0.9497).
The research sheds light on multitemporal image fusion for high-precision image change detection, which in turn can be used for monitoring natural hazards.
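The abstract does not detail the augmentation procedure; purely as an illustration, one common way to simulate illumination variation between acquisition dates is to apply independent random per-band gain and offset jitter to the pre- and post-flood reflectance patches. The sketch below assumes this simple radiometric jitter (the function name, parameter ranges, and patch layout are all hypothetical, not taken from the paper):

```python
import numpy as np

def illumination_augment(pre_patch, post_patch, rng=None,
                         gain_range=(0.9, 1.1), offset_range=(-0.05, 0.05)):
    """Hypothetical illumination augmentation for bi-temporal patch pairs.

    Applies an independent random per-band gain and offset to the pre- and
    post-flood patches, mimicking illumination differences between the two
    acquisition dates. Patches are float surface reflectance in [0, 1] with
    shape (H, W, bands).
    """
    rng = np.random.default_rng(rng)
    augmented = []
    for patch in (pre_patch, post_patch):
        n_bands = patch.shape[-1]
        gain = rng.uniform(*gain_range, size=n_bands)      # per-band multiplicative jitter
        offset = rng.uniform(*offset_range, size=n_bands)  # per-band additive jitter
        augmented.append(np.clip(patch * gain + offset, 0.0, 1.0))
    return tuple(augmented)

# Example: a 32x32 patch pair with 4 spectral bands
pre = np.full((32, 32, 4), 0.5)
post = np.full((32, 32, 4), 0.5)
pre_aug, post_aug = illumination_augment(pre, post, rng=0)
```

Because the jitter is drawn independently for each image in the pair, the network sees the same ground scene under different simulated illumination, encouraging it to learn change features that are invariant to acquisition conditions.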
