Abstract

The increase in the number of vehicles on the road necessitates the use of automated systems for driver assistance. These systems are also important components of self-driving vehicles. A Traffic Sign Recognition system is one such automated system, providing contextual awareness for the self-driving vehicle. CNN-based object detection methods such as Faster R-CNN provide human-level accuracy and real-time performance and have proven successful in Traffic Sign Recognition systems [1]. Single-stage detection systems such as YOLO [2] and SSD [3], despite offering state-of-the-art real-time detection speed, are not preferred for the traffic sign detection problem because of their reduced accuracy and difficulty with small objects. RetinaNet has shown promising results with respect to the accuracy and speed required for object detection. It uses Focal Loss [4] and a Feature Pyramid Network (FPN) [5] to tackle the low-accuracy and small-object detection problems. In this paper, a RetinaNet-based traffic sign recognition approach for self-driving cars is presented, along with a comparative analysis of its performance against a Faster R-CNN based sign detector [1] and a YOLOv3-based detector [9]. RetinaNet forms the traffic sign detection network and a CNN-based classifier forms the traffic sign class recognizer. The detection network is trained and evaluated on the German Traffic Sign Detection Benchmark (GTSDB) [6] dataset, and the classifier performance is verified on the German Traffic Sign Recognition Benchmark (GTSRB) [7] dataset.
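For reference, the focal loss cited above [4] down-weights the contribution of easy, well-classified examples so that training concentrates on hard ones, which is what lets a single-stage detector like RetinaNet cope with the extreme foreground/background imbalance and small objects such as distant traffic signs. The following is a minimal illustrative sketch of the binary form of that loss, not code from the paper; the function name and the NumPy formulation are assumptions, while alpha = 0.25 and gamma = 2.0 are the values reported for RetinaNet in [4].

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Illustrative binary focal loss (Lin et al. [4]).

    p: predicted probability of the positive class, shape (N,)
    y: ground-truth labels in {0, 1}, shape (N,)
    alpha, gamma: balancing and focusing parameters.
    """
    eps = 1e-7
    p = np.clip(p, eps, 1.0 - eps)
    # p_t is the probability the model assigns to the true class.
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    # The (1 - p_t)^gamma factor shrinks the loss of easy examples,
    # so the many trivial background anchors do not dominate training.
    return np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t))
```

With gamma = 0 this reduces to the ordinary (alpha-balanced) cross-entropy loss; increasing gamma pushes the optimizer harder toward the misclassified, hard examples.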
