Abstract

Traditional bridge inspections present considerable challenges in terms of efficiency and accuracy. However, recent advancements in Unmanned Aerial Systems (UASs) and deep learning for object detection have opened new avenues for automatic bridge damage detection. We present a comprehensive framework that leverages these technologies for automated damage detection in UAS imagery, followed by accurate mapping of the damage predictions onto photogrammetric models. We propose a photogrammetric procedure to retrieve geolocated bridge models based solely on Real-Time Kinematic (RTK) information. Within the damage detection step, we conduct extensive testing and optimization of model hyperparameters using YOLOv8 and Slicing Aided Hyper Inference (SAHI). Next, we map the predictions onto the 3D model using ray casting, which allows the predictions to be grouped and filtered by area and position. Finally, a Graphical User Interface (GUI) allows bridge inspectors to identify false positive predictions, generate new training data, and directly measure damage dimensions in the images. Validation on a concrete box girder bridge yielded a photogrammetric model with a mean error of 1.3 cm, eliminating the need for ground control points. Our model training process revealed substantial performance variations between the training and test datasets, underscoring the importance of evaluating optimal hyperparameters on UAS inspection images rather than relying on validation metrics alone. Lastly, we successfully map the detected damage and create new training data from the UAS inspection images. This framework significantly enhances bridge inspection accuracy and efficiency, providing a strong foundation for future developments in automated bridge inspections.
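
For readers unfamiliar with sliced inference, the following minimal sketch shows how a trained YOLOv8 model might be run through SAHI's sliced-prediction API on a high-resolution UAS image. The weight file, input image, slice size, and overlap ratios here are illustrative placeholders, not the hyperparameter values optimized in the paper.

```python
# Minimal sketch of sliced inference with SAHI and YOLOv8.
# File names and hyperparameters are illustrative, not the paper's tuned values.
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Wrap a trained YOLOv8 damage-detection model for use with SAHI.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="yolov8",              # Ultralytics YOLOv8 backend
    model_path="damage_yolov8.pt",    # hypothetical trained weights
    confidence_threshold=0.25,
    device="cuda:0",
)

# Slice the high-resolution UAS image into overlapping tiles, run
# detection on each tile, and merge the tile-level predictions.
result = get_sliced_prediction(
    "uas_inspection_image.jpg",       # hypothetical input image
    detection_model,
    slice_height=640,
    slice_width=640,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

# Inspect the merged, full-image predictions.
for pred in result.object_prediction_list:
    print(pred.category.name, pred.score.value, pred.bbox.to_xyxy())
```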
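A similarly hedged sketch of the ray-casting step follows. The paper does not name its geometry library, so trimesh stands in here, and the camera intrinsics K and pose (R, t) are assumed inputs from the photogrammetric reconstruction. A detection's pixel coordinates are back-projected into a world-space ray and intersected with the mesh to obtain a 3D position for grouping and filtering.

```python
# Illustrative ray-casting sketch using trimesh (stand-in library);
# camera intrinsics K and pose (R, t) are assumed inputs.
import numpy as np
import trimesh

mesh = trimesh.load("bridge_model.obj")  # hypothetical photogrammetric mesh

def cast_detection_ray(pixel, K, R, t):
    """Back-project a pixel (u, v) into a world-space ray and intersect
    it with the mesh; returns the nearest hit point or None."""
    u, v = pixel
    # Ray direction in camera coordinates (pinhole model).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate into world coordinates; the camera center is the ray origin.
    d_world = R.T @ d_cam
    origin = -R.T @ t
    locations, _, _ = mesh.ray.intersects_location(
        ray_origins=[origin],
        ray_directions=[d_world / np.linalg.norm(d_world)],
    )
    if len(locations) == 0:
        return None
    # Keep the intersection closest to the camera (the visible surface).
    dists = np.linalg.norm(locations - origin, axis=1)
    return locations[np.argmin(dists)]
```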
