Abstract

The tremendous success of automated methods for the detection of damage in images of civil infrastructure has been fueled by exponential advances in deep learning over the past decade. In particular, many efforts have taken place in academia and more recently in industry that demonstrate the success of supervised deep learning methods for semantic segmentation of damage (i.e., the pixel-wise identification of damage in images). However, in graduating from the detection of damage to applications such as inspection automation, efforts have been limited by the lack of large open datasets of real-world images with annotations for multiple types of damage, and other related information such as material and component types. Such datasets for structural inspections are difficult to develop because annotating the complex and amorphous shapes taken by damage patterns remains a tedious task (requiring too many clicks and careful selection of points), even with state-of-the-art annotation software. In this work, InstaDam—an open source software platform for fast pixel-wise annotation of damage—is presented. By utilizing binary masks to aid user input, InstaDam greatly speeds up the annotation process and improves the consistency of annotations. The masks are generated by applying established image processing techniques (IPTs) to the images being annotated. Several different tunable IPTs are implemented to allow for rapid annotation of a wide variety of damage types. The paper first describes details of InstaDam’s software architecture and presents some of its key features. Then, the benefits of InstaDam are explored by comparing it to the Image Labeler app in Matlab. Experiments are conducted where two employed student annotators are given the task of annotating damage in a small dataset of images using Matlab, InstaDam without IPTs, and InstaDam. Comparisons are made, quantifying the improvements in annotation speed and annotation consistency across annotators. A description of the statistics of the different IPTs used for different annotated classes is presented. The gains in annotation consistency and efficiency from using InstaDam will facilitate the development of datasets that can help to advance research into automation of visual inspections.
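
As a rough illustration of the idea (not the actual InstaDam implementation, which is not detailed in this abstract), one tunable IPT could be an edge detector whose output is dilated into a binary mask that the annotator then accepts, trims, or combines. The sketch below assumes Python with OpenCV; the function name and parameter values are hypothetical.

    import cv2
    import numpy as np

    def edge_mask(image, low_thresh=50, high_thresh=150, dilate_px=3):
        # Detect edges (cracks and similar damage often appear as strong edges),
        # then dilate them so thin responses become a region that is easy to select.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, low_thresh, high_thresh)
        kernel = np.ones((dilate_px, dilate_px), np.uint8)
        mask = cv2.dilate(edges, kernel, iterations=1)
        return mask > 0  # boolean mask the annotator can accept or refine

Exposing parameters such as low_thresh, high_thresh, and dilate_px as per-image controls is one way an IPT of this kind could be made "tunable"; other classical techniques (thresholding, morphological filtering) could be wrapped in the same manner.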

Highlights

  • Visual inspections to assess the condition of civil infrastructure are time-consuming, repetitive, and put inspectors at high levels of risk

  • The data generated from the conducted experiments is analyzed, and additional qualitative improvements from the use of InstaDam are presented

  • When InstaDam is used without image processing techniques (IPTs), annotation is slightly faster than with Matlab, suggesting that the base software functionality is on par with existing annotation software


Summary

Introduction

Visual inspections to assess the condition of civil infrastructure are time-consuming, repetitive, and put inspectors at high levels of risk. Automated, vision-based approaches have been examined for these inspection tasks, including (i) object detection, where a bounding box is drawn around the damaged region [5,6,7,8,9], and (ii) semantic segmentation [1,10,11,12,13], where each pixel is classified as a certain damage type. While these methods have shown tremendous success, researchers often develop their own private datasets for each new study, making rigorous benchmarking and comparison of advances in deep learning architectures difficult. The availability of large open datasets for semantic segmentation of damage in civil infrastructure will facilitate advances towards the automation of civil infrastructure inspection.
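
To make the distinction concrete, the toy sketch below (NumPy-based, with made-up array sizes and class ids, not drawn from the cited works) contrasts a pixel-wise segmentation label with the coarser bounding box that an object-detection annotation would record for the same damage.

    import numpy as np

    # A semantic-segmentation label assigns a class id to every pixel,
    # whereas object detection only records a box around the damaged region.
    seg_label = np.zeros((256, 256), dtype=np.uint8)  # 0 = background
    seg_label[100:140, 60:200] = 1                    # 1 = "crack" pixels (toy region)

    ys, xs = np.nonzero(seg_label == 1)
    bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # coarser description of the same damage
    print(bbox)  # (60, 100, 199, 139)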

