Abstract
In this study, a regional convolutional neural network (RCNN)-based deep learning and Hough line transform (HLT) algorithm are applied to monitor corroded and loosened bolts in steel structures. The monitoring goals are to detect rusted bolts, distinguishing them from non-corroded ones, and to estimate the bolt-loosening angles of the identified bolts. The following approaches are used to achieve these goals. First, an RCNN-based autonomous bolt detection scheme is designed to identify corroded and clean bolts in a captured image. Second, an HLT-based image-processing algorithm is designed to estimate the rotational angles (i.e., bolt-loosening) of the cropped bolts. Finally, the accuracy of the proposed framework is experimentally evaluated under various capture distances, perspective distortions, and light intensities. The lab-scale monitoring results indicate that the suggested method accurately detects rusted bolts in images captured under perspective distortion angles below 15° and light intensities above 63 lux.
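The loosening-angle step can be illustrated with a minimal Hough line transform: edge pixels of a cropped bolt image vote in a (ρ, θ) accumulator, and the peak θ gives the dominant line orientation; comparing this orientation between a baseline and a current image yields the rotation (loosening) angle. The sketch below is a plain NumPy illustration of this idea, not the paper's implementation (the function name and resolution parameter are assumptions for this example).

```python
import numpy as np

def hough_dominant_angle(edge_img, angle_res_deg=1.0):
    """Estimate the dominant line orientation (in degrees) of a binary edge
    image using a minimal Hough line transform (rho-theta accumulator).
    Returns the normal angle theta of the strongest line; the loosening angle
    would be the difference between baseline and current estimates."""
    ys, xs = np.nonzero(edge_img)          # coordinates of edge pixels
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))    # max possible |rho|
    thetas = np.deg2rad(np.arange(0.0, 180.0, angle_res_deg))
    # Each edge pixel votes: rho = x*cos(theta) + y*sin(theta) for every theta
    rhos = np.outer(xs, np.cos(thetas)) + np.outer(ys, np.sin(thetas))
    rho_idx = np.round(rhos).astype(int) + diag   # shift so indices are >= 0
    acc = np.zeros((2 * diag + 1, len(thetas)), dtype=int)
    for t in range(len(thetas)):
        np.add.at(acc[:, t], rho_idx[:, t], 1)    # accumulate votes per theta
    _, t_best = np.unravel_index(np.argmax(acc), acc.shape)
    return np.rad2deg(thetas[t_best])

# Synthetic check: a horizontal row of edge pixels has its line normal at 90 deg
img = np.zeros((50, 50), dtype=np.uint8)
img[25, 5:45] = 1
angle = hough_dominant_angle(img)  # expected near 90.0
```

In practice one would extract edges (e.g., with a Canny detector) from the RCNN-cropped bolt region before voting, and track several line peaks corresponding to the hexagonal bolt-head edges rather than a single dominant line.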
Highlights
Bolts serve to connect structural components and to maintain the load-bearing performance of a steel structure
This paper presents a regional convolutional neural network (RCNN)-based deep learning and Hough line transform (HLT) algorithm to autonomously monitor bolt corrosion and loosening in steel structures
Lab-scale experiments were performed under three uncertain conditions to evaluate the feasibility of the RCNN-based bolt detector and HLT-based bolt angle estimation
Summary
Bolts serve to connect structural components and to maintain the load-bearing performance of a steel structure. Despite prior research efforts, vision-based methods still need improvement for the accurate detection of complex damage types such as corrosion and bolt-loosening in bolted connections. To this end, this paper presents a regional convolutional neural network (RCNN)-based deep learning and Hough line transform (HLT) algorithm to autonomously monitor bolt corrosion and loosening in steel structures. The accuracy of the proposed framework is experimentally evaluated under various capture distances, perspective distortions, and light intensities.