Abstract
In computer vision, object detection tasks must be executed both quickly and accurately. However, current deep-learning-based road damage detection approaches rely on complex, computationally expensive models. To address these issues, we present a lightweight road damage detection model, YOLO-LRDD, obtained by enhancing YOLOv5s, which offers a good balance between detection precision and speed. First, we propose a novel backbone network, Shuffle-ECANet, by adding an ECA attention module to the lightweight ShuffleNetV2. Second, to ensure reliable detection, we replace the original feature pyramid network with BiFPN, which improves the network's feature representation capacity. Moreover, during training, the localization loss is replaced with Focal-EIoU to obtain higher-quality bounding-box regression. Finally, we augment the widely used RDD2020 dataset with additional samples of Chinese road scenes and compare YOLO-LRDD against several state-of-the-art object detection methods. Our experiments show that YOLO-LRDD achieves superior accuracy and efficiency with a smaller model. In particular, compared with YOLOv5s, YOLO-LRDD improves single-image inference speed by 22.3% and reduces model size by 28.8% while maintaining comparable accuracy. Its smaller, lighter model also makes it easier to deploy on mobile devices than the other approaches.
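For illustration, the sketch below shows one common way an ECA-style channel-attention block can be attached to a backbone feature map such as a ShuffleNetV2 stage output. It is a minimal PyTorch example under stated assumptions: the module name, kernel size, and example channel count are illustrative, not the exact Shuffle-ECANet implementation described in the paper.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention sketch: global average pooling,
    a 1D convolution across the channel dimension, and sigmoid gating."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        # x: (B, C, H, W) -> per-channel descriptor (B, C, 1, 1)
        y = self.pool(x)
        # Treat channels as a 1D sequence: (B, 1, C) for the 1D conv
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        # Back to (B, C, 1, 1) and gate each channel of the input
        y = self.gate(y.transpose(-1, -2).unsqueeze(-1))
        return x * y.expand_as(y.expand_as(x))

# Hypothetical usage: re-weight a ShuffleNetV2-style stage output.
# The tensor shape below is only an example, not taken from the paper.
feat = torch.randn(1, 116, 40, 40)
attended = ECA(kernel_size=3)(feat)   # same shape, channel-attended
```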