Abstract

To address the challenges of low detection accuracy and inadequate real-time performance in road scene detection, this article introduces an enhanced algorithm, SDG-YOLOv5. The algorithm incorporates the SIoU loss function to account for the angle between predicted and ground-truth bounding boxes, ensuring directionality during regression and improving both regression accuracy and convergence speed. A novel lightweight decoupled head (DH) separates the classification and regression tasks, avoiding conflicts between their differing focus areas. Moreover, Global Attention Mechanism Group Convolution (GAMGC), a lightweight strategy, is used to strengthen the network's ability to exploit contextual information, improving the detection of small targets. Extensive experiments on the Udacity Self Driving Car, BDD100K, and KITTI datasets show that the proposed algorithm improves mAP@.5 by 2.2%, 3.4%, and 1.0%, respectively, over the original YOLOv5, at a detection speed of 30.3 FPS. These results demonstrate that SDG-YOLOv5 effectively addresses both detection accuracy and real-time performance in road scene detection.
