Abstract

Recent major bridge accidents have underscored the need for effective technological solutions for defect detection that can minimize the possibility of bridge-related accidents in the future. To that end, this research focuses on the development of an automated system for detecting defective regions within different steel parts of bridges. At present, no open-source image dataset exists for this purpose. Consequently, the training dataset was developed using images acquired from bridges in Vietnam, and validation was performed using images acquired from the Lovelock Bridge on Highway 80 in Lovelock, NV, USA. A total of 5,500 images of varying dimensions were used (4,000 for training and 1,500 for validation); the original steel bridge images were resized to 572 × 572 pixels for the training and evaluation of different Deep Encoder-Decoder networks. The use of diverse data from different bridges should allow the development of a robust Deep Encoder-Decoder network, with considerable implications for practical systems in the future. This study employs state-of-the-art Deep Encoder-Decoder networks that were recently developed for other applications; no comparable study has yet addressed defect detection in steel bridges. A comparative evaluation of different Deep Encoder-Decoder networks is presented, and the performance of the system is compared with recent advanced approaches. The results reveal the considerable potential of Deep Encoder-Decoder networks for defect detection in steel bridges, which will be further exploited in future studies.
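To make the described pipeline concrete, the following is a minimal sketch (not the authors' code) of the two steps the abstract names: resizing a bridge image to the 572 × 572 input size and passing it through a small encoder-decoder that produces per-pixel defect logits. The layer sizes, transforms, and `TinyEncoderDecoder` class are illustrative assumptions, not the architectures evaluated in the study.

```python
# A minimal sketch, assuming PyTorch/torchvision; the actual networks
# compared in the study are state-of-the-art encoder-decoders, not this toy.
import torch
import torch.nn as nn
from torchvision import transforms

# Step 1 (assumed preprocessing): resize acquired images to 572 x 572.
preprocess = transforms.Compose([
    transforms.Resize((572, 572)),
    transforms.ToTensor(),
])

class TinyEncoderDecoder(nn.Module):
    """Toy encoder-decoder: downsample twice, then upsample back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                     # 572 -> 286
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                                     # 286 -> 143
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # 143 -> 286
            nn.ConvTranspose2d(16, 1, 2, stride=2),              # 286 -> 572
        )

    def forward(self, x):
        # Returns one logit per pixel; thresholding yields a defect mask.
        return self.decoder(self.encoder(x))

model = TinyEncoderDecoder()
x = torch.randn(1, 3, 572, 572)   # a dummy resized bridge image
logits = model(x)
print(logits.shape)               # torch.Size([1, 1, 572, 572])
```

In this setup, a segmentation loss such as binary cross-entropy against pixel-level defect annotations would be used for training; the comparative evaluation in the study would then swap in each candidate encoder-decoder architecture under the same input resolution and data split.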
