Abstract

This paper presents a system for the detection and recognition of traffic signs using 2D images and 3D scene data. The detection and recognition of the 3D structures (poles and signs) and the classification of the traffic signs are based on a new 3D feature extraction method and on a Deep Learning method. Traffic sign recognition aims to increase road safety for autonomous and semi-autonomous intelligent robotic vehicles operating in structured environments with traffic rules (urban streets, roads, or highways). The proposed system can be used as an Advanced Driver Assistance System (ADAS) to help human drivers drive more safely and respect the traffic rules. It can also be adopted in fully autonomous vehicles for the task of detecting traffic signs, making it possible to adapt the vehicle's navigation control to the local traffic rules. The system must be able to detect traffic signs using 3D point cloud data and to classify several different traffic signs (e.g., maximum speed allowed, stop, slow down, turn ahead, pedestrian crossing) using 2D color, texture, and shape information. The results are promising and very satisfactory: we obtained an accuracy of 97.64% in the 2D classification task and 76% accuracy in the single-frame 3D detection task. These results were obtained for testing on two well-known benchmark datasets: The KITTI Vision Benchmark Suite, containing street scenes with traffic signs, and the INI - German Traffic Sign Benchmark.
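The abstract does not detail the network used for the 2D classification step, so the sketch below is only one plausible illustration of a deep-learning sign classifier, not the authors' architecture. It assumes 32x32 RGB crops of detected signs and the 43 classes of the German Traffic Sign Recognition Benchmark; all layer sizes, the class count, and the class name TrafficSignCNN are assumptions made for this example.

# Minimal sketch (not the authors' network): a small CNN that maps a cropped
# traffic sign image to class logits, assuming 32x32 RGB inputs and 43 classes.
import torch
import torch.nn as nn

class TrafficSignCNN(nn.Module):
    def __init__(self, num_classes: int = 43):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 32x32 -> 32x32
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # -> 8x8
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                              # -> 4x4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),                  # per-class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = TrafficSignCNN()
    dummy = torch.randn(1, 3, 32, 32)    # one synthetic 32x32 RGB sign crop
    logits = model(dummy)
    print(logits.shape)                  # torch.Size([1, 43])

In a pipeline such as the one described, a classifier of this kind would receive image regions corresponding to sign candidates detected in the 3D point cloud and output the predicted traffic sign class.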
