Abstract

This paper assesses the effectiveness of several high-level object detection meta-architectures, including Faster R-CNN, R-FCN, SSD, and YOLO, in recognizing traffic signs. Since traffic sign recognition is a critical element of driver-assistance and autonomous driving systems, this study explores how five distinct feature-extraction architectures (ResNet V1 50, ResNet V1 101, Inception V2, Inception-ResNet-V2, and Darknet-19) affect the performance of these meta-architectures. To evaluate the models, the authors fine-tuned pre-trained object detection models on the German Traffic Sign Detection Benchmark (GTSDB) dataset. While Faster R-CNN with Inception-ResNet-V2 proved to be the most accurate of the models, the unbalanced distribution of mandatory, prohibitory, and danger signs in the GTSDB dataset may have biased the detection results. Overall, the results of this study can help improve the accuracy of driver-assistance and autonomous driving systems by providing insight into how various object detection models perform on traffic sign recognition.
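The class-imbalance concern raised above can be made concrete. One common mitigation during fine-tuning is inverse-frequency loss weighting, so that under-represented sign categories contribute more per example. The sketch below is illustrative only: the sign counts are hypothetical, not the actual GTSDB distribution, and the weighting scheme is a generic technique rather than the paper's method.

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """Per-class loss weights inversely proportional to class frequency,
    normalized so that the count-weighted average weight equals 1.0."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {c: total / (n_classes * counts[c]) for c in counts}

# Hypothetical counts (not real GTSDB figures): prohibitory signs dominate.
labels = ["prohibitory"] * 600 + ["danger"] * 200 + ["mandatory"] * 100
weights = inverse_frequency_weights(labels)
# Rarer classes receive proportionally larger weights,
# e.g. weights["mandatory"] > weights["danger"] > weights["prohibitory"].
```

Such weights could be passed to a classification loss during fine-tuning; whether this removes the bias observed in the study would need to be verified empirically.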
