Abstract

Civil infrastructure condition assessment using visual recognition methods has shown significant potential for automating various aspects of the problem, including identification and localization of critical structural components, as well as detection and quantification of structural damage. Applying these methods typically requires large amounts of training data consisting of images and corresponding ground-truth annotations. Obtaining such datasets is challenging, however, because most existing approaches rely on manual annotation. Given this limited data availability, developing effective visual recognition systems that can extract all required information is not straightforward. This research leverages synthetic environments to develop a unified system for automated vision-based structural condition assessment that can identify and localize critical structural components, and then detect and quantify damage to those components. The synthetic environments can produce images and associated ground-truth annotations for semantic segmentation of structural components and damage, as well as for monocular depth estimation for structural component localization. To illustrate the approach, automated vision-based structural condition assessment of reinforced concrete railway viaducts on a Japanese high-speed railway line (the Tokaido Shinkansen) is explored. The effectiveness of the synthetic environments and the generated dataset (the Tokaido dataset) is demonstrated by training fully convolutional network-based semantic segmentation and monocular depth estimation algorithms, and then testing the networks on both synthetic and real-world images. Finally, all trained algorithms are combined to realize an automated system for structural condition assessment.
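The final step of the pipeline summarized above, combining the three network outputs into a per-component condition report, can be sketched as follows. This is a minimal illustration under assumed conventions, not the paper's implementation: the class ids in `COMPONENT_NAMES`, the function `assess`, and its inputs (a component segmentation map, a binary damage map, and a depth map of the same shape) are all hypothetical.

```python
# Hypothetical sketch: fuse component segmentation, damage segmentation,
# and monocular depth predictions into per-component damage statistics.

COMPONENT_NAMES = {1: "slab", 2: "pier", 3: "girder"}  # assumed class ids; 0 = background

def assess(component_map, damage_map, depth_map):
    """component_map: 2D list of class ids; damage_map: 2D list of 0/1 flags;
    depth_map: 2D list of depths in meters (all the same shape).
    Returns {component_name: {"damage_ratio": ..., "mean_depth_m": ...}}."""
    stats = {}
    for i, row in enumerate(component_map):
        for j, cls in enumerate(row):
            if cls == 0:  # skip background pixels
                continue
            s = stats.setdefault(cls, {"pixels": 0, "damaged": 0, "depth_sum": 0.0})
            s["pixels"] += 1                 # component extent
            s["damaged"] += damage_map[i][j]  # damage quantification
            s["depth_sum"] += depth_map[i][j]  # localization via depth
    return {
        COMPONENT_NAMES.get(cls, str(cls)): {
            "damage_ratio": s["damaged"] / s["pixels"],
            "mean_depth_m": s["depth_sum"] / s["pixels"],
        }
        for cls, s in stats.items()
    }
```

In practice the two segmentation maps and the depth map would each come from a trained fully convolutional network; this sketch only shows how their outputs could be aggregated.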
