Abstract
Object detection is a crucial foundation of autonomous driving: accurate, real-time detection of road objects such as vehicles and pedestrians is essential for the success of autonomous vehicles. Convolutional neural networks, represented by YOLOv8, have made significant progress in this domain, but they still suffer from false recognition, missed detections, and susceptibility to complex weather. To address the poor traffic sign detection performance in autonomous driving scenes, an improved YOLOv8 detection algorithm is proposed. First, Mosaic data augmentation and a Multi-Path Attention Mechanism (MPAM) are introduced into the YOLOv8 object detection model; the improved model obtains location information from the feature map in the early stages of recognition, allowing it to focus on the regions of interest. Second, because the model weights global and local features differently, a bidirectional fusion structure is added to the network so that the model attends more closely to fine-grained small targets. Finally, the aspect ratios of predicted and ground-truth boxes are considered jointly: a CIoU-tune loss function replaces the original IoU loss, minimizing the width and height discrepancies between predicted and actual bounding boxes; this accelerates the convergence of the model and yields better localization. After ablation analysis, the YOLO-MPAM method is compared with other methods on the TT100K dataset. The improved algorithm achieves an mAP50 of 86.1%, a 9.6% improvement over YOLOv8n, and detects small objects with higher accuracy, effectively enhancing detection performance at the same inference speed.
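For readers unfamiliar with the loss being replaced, the following is a minimal sketch of the standard CIoU formulation (IoU penalized by center-distance and aspect-ratio terms) that the abstract's CIoU-tune variant builds on. The function name and box format are illustrative assumptions; the paper's exact CIoU-tune modification is not reproduced here.

```python
import math

def ciou_loss(box_p, box_g):
    """Standard Complete-IoU loss for boxes in (x1, y1, x2, y2) format.

    Illustrative sketch only; the paper's "CIoU-tune" variant may tune
    the aspect-ratio term differently.
    """
    # Intersection area of the two boxes
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    wp, hp = box_p[2] - box_p[0], box_p[3] - box_p[1]
    wg, hg = box_g[2] - box_g[0], box_g[3] - box_g[1]
    iou = inter / (wp * hp + wg * hg - inter)

    # Squared distance between box centers
    cxp, cyp = (box_p[0] + box_p[2]) / 2, (box_p[1] + box_p[3]) / 2
    cxg, cyg = (box_g[0] + box_g[2]) / 2, (box_g[1] + box_g[3]) / 2
    rho2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2

    # Squared diagonal of the smallest enclosing box
    ew = max(box_p[2], box_g[2]) - min(box_p[0], box_g[0])
    eh = max(box_p[3], box_g[3]) - min(box_p[1], box_g[1])
    c2 = ew ** 2 + eh ** 2

    # Aspect-ratio consistency term and its trade-off weight
    v = (4 / math.pi ** 2) * (math.atan(wg / hg) - math.atan(wp / hp)) ** 2
    alpha = v / ((1 - iou) + v + 1e-9)

    return 1 - (iou - rho2 / c2 - alpha * v)
```

Unlike plain IoU loss, the center-distance and aspect-ratio penalties give a useful gradient even when the predicted and ground-truth boxes barely overlap, which is the property the abstract credits for faster convergence.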