Abstract
The You Only Look Once (YOLO) network is considered highly suitable for real-time object detection tasks due to characteristics such as high speed, single-shot detection, global context awareness, scalability, and adaptability to real-world conditions. This work presents a comprehensive analysis of various YOLO models for detecting cracks in concrete structures, aiming to assist in selecting an optimal model for future detection and segmentation tasks. The YOLO models are first trained on a dataset containing images both with and without cracks, producing a generalized model capable of extracting abstract features beneficial for crack detection. Transfer learning is then applied using a dataset that reflects real-world conditions, such as occlusions, varying crack sizes, and rotations, to further refine the model. Crack detection in concrete remains challenging due to the wide variation in crack sizes and aspect ratios and the presence of complex backgrounds. To achieve optimal performance, we test different versions of YOLO, a state-of-the-art single-shot detector, and aim to balance inference speed and mean average precision (mAP). Our results indicate that YOLOv10 demonstrates superior performance, achieving an mAP of 74.52% with an inference time of 19.5 milliseconds per image, making it the most effective among the models tested.
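The two-stage workflow summarized above (generic training followed by transfer learning and evaluation of mAP versus inference time) could be sketched roughly as follows. This is a minimal illustration assuming the Ultralytics YOLO Python API and hypothetical dataset configuration files (crack_pretrain.yaml, crack_realworld.yaml); it is not the authors' actual pipeline.

```python
# Minimal sketch of a two-stage crack-detection training workflow,
# assuming the Ultralytics YOLO API and hypothetical dataset configs.
from ultralytics import YOLO

# Stage 1: train on a dataset of images with and without cracks to learn
# generic crack-related features (produces the generalized base model).
model = YOLO("yolov10n.pt")  # pretrained YOLOv10 weights (assumed checkpoint name)
model.train(data="crack_pretrain.yaml", epochs=100, imgsz=640)

# Stage 2: transfer learning on a dataset reflecting real-world conditions
# (occlusions, varying crack sizes, rotations).
model.train(data="crack_realworld.yaml", epochs=50, imgsz=640)

# Evaluate: mAP and per-image inference time are the metrics compared
# across YOLO versions in this work.
metrics = model.val(data="crack_realworld.yaml")
print(f"mAP@0.5: {metrics.box.map50:.4f}")
print(f"Inference time per image (ms): {metrics.speed['inference']:.1f}")
```

In practice, the same script can be rerun with different model checkpoints (e.g., other YOLO versions) to compare accuracy and speed trade-offs.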