Abstract

In driverless-technology applications, current traffic sign recognition methods are susceptible to ambient light interference, changes in target size and complex backgrounds, resulting in reduced recognition accuracy. To address these challenges, this study introduces an optimisation algorithm called ETSR-YOLO, which is based on the YOLOv5s algorithm. First, the study improves the path aggregation network (PANet) of YOLOv5s by generating an additional high-resolution feature layer, which enhances multi-scale feature fusion and improves the recognition of small-sized objects. Second, the study introduces two improved C3 modules that aim to suppress background noise interference and enhance the feature extraction capability of the network. Finally, the study uses the Wise-IoU (WIoU) function in the post-processing stage to improve the learning ability and robustness of the algorithm across different samples. The experimental results show that ETSR-YOLO improves mAP@0.5 by 6.6% on the Tsinghua-Tencent 100K (TT100K) dataset and by 1.9% on the CSUST Chinese Traffic Sign Detection Benchmark 2021 (CCTSDB2021) dataset. In experiments conducted on an embedded computing platform, ETSR-YOLO achieves a short average inference time, confirming that it can deliver reliable traffic sign detection for intelligent vehicles operating in real-world traffic scenes. The source code and test results of the models used in this study are available at https://github.com/cbrook16/ETSR-YOLO.

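The abstract does not give the exact loss formulation used in ETSR-YOLO; as a rough illustration only, the sketch below shows how a Wise-IoU v1-style bounding-box regression loss can be computed in PyTorch, where the centre-distance penalty is normalised by the size of the smallest enclosing box and the denominator is detached from the gradient. The function and variable names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed WIoU v1 formulation, not the authors' code).
# pred and target are (N, 4) tensors of boxes in (x1, y1, x2, y2) format.
import torch

def wiou_v1_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
    # Intersection area
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    # Union area and plain IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Width and height of the smallest box enclosing both boxes
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])

    # Centre-point offsets between predicted and ground-truth boxes
    dx = (pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) * 0.5
    dy = (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) * 0.5

    # Distance-based focusing factor; the enclosing-box term is detached so it
    # scales the penalty without contributing gradients of its own.
    r_wiou = torch.exp((dx ** 2 + dy ** 2) / (cw ** 2 + ch ** 2 + eps).detach())
    return (r_wiou * (1.0 - iou)).mean()
```

In a YOLO-style detector such a term would replace the CIoU box loss during training, e.g. `loss_box = wiou_v1_loss(pred_boxes, gt_boxes)`; how ETSR-YOLO integrates WIoU with its other loss terms is described in the full paper.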