Many parts of the world experience severe episodes of flooding every year. In addition to the high cost of mitigation and damage to property, floods make roads impassable and hamper community evacuation, movement of goods and services, and rescue missions. Knowing the depth of floodwater is critical to the success of response and recovery operations that follow. However, flood mapping, especially in urban areas, using traditional methods such as remote sensing and digital elevation models (DEMs) yields large errors due to reshaped surface topography and microtopographic variations combined with vegetation bias. This paper presents a deep neural network approach to detect submerged stop signs in photos taken from flooded roads and intersections, coupled with Canny edge detection and the probabilistic Hough transform to calculate pole length and estimate floodwater depth. Additionally, a tilt correction technique is implemented to address the problem of sideways tilt in visual analysis of submerged stop signs. An in-house dataset, named BluPix 2020.1, consisting of paired web-mined photos of submerged stop signs across 10 FEMA regions (for U.S. locations) and Canada, is used to evaluate the models. Overall, pole length is estimated with an RMSE of 17.43 and 8.61 in. in pre- and post-flood photos, respectively, leading to a mean absolute error of 12.63 in. in floodwater depth estimation. Findings of this research seek to equip jurisdictions, local governments, and citizens in flood-prone regions with a simple, reliable, and scalable solution that can provide (near-) real-time estimation of floodwater depth in their surroundings.