Abstract

Highways are an important component of any country's infrastructure. However, some highways in Indonesia endanger users because road safety is not adequately maintained. Detecting cracks early in the deterioration process can prevent further damage and lower maintenance costs. A recent study sought to develop a method for detecting road damage by combining the road damage detection (RDD) dataset with generative adversarial networks and data augmentation to improve training. The current study aims to extend the you only look once (YOLO) framework by incorporating the Swin Transformer into the cross stage partial (CSP) component of YOLOv7, with the goal of improving object detection accuracy across a variety of visual scenarios. The study compares object detection models with differing parameter counts and configurations: YOLOv5l, YOLOv6l, YOLOv7-tiny, YOLOv7, and YOLOv7x. YOLOv5l has 46 million parameters and 108 billion floating-point operations (FLOPs), whereas YOLOv6l has 59.5 million parameters and 150 billion FLOPs. With 31 million parameters and 140 billion FLOPs, the proposed YOLOv7-swin model performs best, achieving a mean average precision (mAP) of 0.47 at mAP_0.5 and 0.232 at mAP_0.5:0.95. The experimental results show that the YOLOv7-swin model outperforms both YOLOv7x and YOLOv7-tiny. The proposed model significantly improves object detection accuracy while keeping complexity and performance in balance.
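
The abstract does not spell out how the Swin Transformer is wired into the CSP component. The sketch below is one plausible reading in PyTorch, not the authors' implementation: the convolutional stack inside a CSP-style block is replaced by Swin-style windowed self-attention. The class names (WindowAttention, CSPSwinBlock), the channel split, the window size of 7, and the head count are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class WindowAttention(nn.Module):
    """Multi-head self-attention over non-overlapping local windows (Swin-style)."""

    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.ws = window_size
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        # x: (B, C, H, W); H and W are assumed divisible by the window size.
        B, C, H, W = x.shape
        ws = self.ws
        # Partition the feature map into (H/ws * W/ws) windows of ws*ws tokens.
        x = x.view(B, C, H // ws, ws, W // ws, ws)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)
        # Pre-norm self-attention with a residual connection.
        h = self.norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        # Reverse the window partition back to (B, C, H, W).
        x = x.view(B, H // ws, W // ws, ws, ws, C)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

class CSPSwinBlock(nn.Module):
    """CSP-style split: one channel half passes through window attention,
    the other half is a shortcut; a 1x1 conv fuses the two paths."""

    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        half = dim // 2
        self.reduce_a = nn.Conv2d(dim, half, kernel_size=1)  # transformed path
        self.reduce_b = nn.Conv2d(dim, half, kernel_size=1)  # shortcut path
        self.swin = WindowAttention(half, window_size, num_heads)
        self.fuse = nn.Conv2d(2 * half, dim, kernel_size=1)

    def forward(self, x):
        a = self.swin(self.reduce_a(x))
        b = self.reduce_b(x)
        return self.fuse(torch.cat([a, b], dim=1))

# Shapes are preserved, so the block can drop into a backbone stage:
block = CSPSwinBlock(dim=128, window_size=7, num_heads=4)
out = block(torch.randn(1, 128, 56, 56))  # -> torch.Size([1, 128, 56, 56])
```

Because the block preserves both spatial and channel dimensions, a substitution of this kind can replace a CSP stage in the YOLOv7 backbone without modifying the neck or detection head.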
