Abstract

This work proposes an image processing pipeline for detecting the extent of floodwater on inundated roadways from images captured by mobile consumer devices, such as smartphones. A sample dataset collected during actual nuisance flooding events in Norfolk, VA, together with location-matched reference images, is used to demonstrate the proposed approach. The highly variable nature of crowdsourced data manifests as discrepancies between location-matched dry/flooded condition image pairs, such as differences in resolution, lighting, and environmental conditions, which make extracting inundation information more challenging. Scenes may also include dynamic objects, such as vehicles and pedestrians, on the roadway. In the proposed pipeline, images first pass through a set of pre-processing operations consisting of water edge detection, image inpainting, and contrast correction. A Region-Based Convolutional Neural Network (R-CNN) is trained, tested, and deployed for vehicle detection, and an inpainting procedure removes the vehicles detected by the R-CNN. The images are then registered using the Scale-Invariant Feature Transform (SIFT) flow algorithm, and the boundaries of the flooded area are detected. Reflections of landmarks and of the sky/clouds pose a further challenge to detecting inundated areas. Reflections of nearby landmarks are identified first, then used as a seed for identifying the remaining water body, including sky/cloud reflections, through saturation channel processing. The result is further refined using the detected water edge lines, and false positives are removed. The proposed method is applied to real-world images and its accuracy is evaluated. The results show that the method performs satisfactorily despite the complexities of the crowdsourced image data and the dynamic environment.
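The seeded identification of the water body described above can be illustrated with a minimal sketch. The function below, a hypothetical simplification not taken from the paper, performs seeded region growing on a saturation channel: starting from pixels already known to be reflections of nearby landmarks, it expands into neighbouring low-saturation pixels, exploiting the fact that water reflecting the sky/clouds tends to appear desaturated. The threshold `max_sat` and the grid-of-lists representation are illustrative assumptions.

```python
from collections import deque

def grow_water_region(sat, seeds, max_sat=0.25):
    """Grow a water mask from reflection seed pixels.

    sat:   2D list of saturation values in [0, 1]
    seeds: list of (row, col) pixels known to be landmark reflections
    Returns a 2D boolean mask of the detected water body.
    (Illustrative sketch; thresholds and data layout are assumptions.)
    """
    h, w = len(sat), len(sat[0])
    water = [[False] * w for _ in range(h)]
    queue = deque(seeds)
    for r, c in seeds:
        water[r][c] = True
    while queue:
        r, c = queue.popleft()
        # 4-connected neighbourhood; expand only into low-saturation pixels
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and not water[nr][nc] and sat[nr][nc] <= max_sat):
                water[nr][nc] = True
                queue.append((nr, nc))
    return water

# Toy example: a low-saturation puddle surrounded by high-saturation road
sat = [
    [0.9, 0.9, 0.9, 0.9, 0.9],
    [0.9, 0.1, 0.2, 0.9, 0.9],
    [0.9, 0.1, 0.1, 0.2, 0.9],
    [0.9, 0.9, 0.9, 0.9, 0.9],
]
mask = grow_water_region(sat, seeds=[(2, 1)])
print(sum(map(sum, mask)))  # 5 pixels labelled as water
```

In the full pipeline, the resulting mask would still be intersected with the detected water edge lines to remove false positives, as the abstract notes.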
