Recent years have witnessed significant advances in machine perception, particularly in the context of self-driving vehicles. Accurate detection and interpretation of road signs are crucial for enhancing the safety, intelligence, and efficiency of these vehicles on the road. Consequently, a growing body of research is dedicated to improving traffic sign recognition, a key component of intelligent transportation systems. Annual statistics attribute numerous accidents to factors such as excessive speed, variable lighting conditions, and the misinterpretation of traffic signs. In response to these challenges, a novel approach for the rapid and reliable recognition of traffic signs from moving vehicles has been developed. The approach leverages a custom dataset encompassing twelve object categories and seven subcategories, reflecting the diversity of road signs encountered in India. A specialized algorithm, TrafficSignNet, was devised to identify signs related to speed, turning, zones, and bumps. The algorithm was trained on 4,962 images and evaluated on 705 images drawn from real traffic scenarios. The evaluation shows that the model maintains high accuracy across varied lighting conditions while processing up to 12 frames per second, a rate compatible with the high-definition output (1280 × 720 pixels) of contemporary vehicle cameras. The model's effectiveness is quantified by accuracy, precision, recall, and F1 score, with respective values of 0.985, 0.978, 0.964, and 0.971, demonstrating its potential to contribute to the advancement of smart transportation systems.
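For reference, the reported F1 score is consistent with the standard harmonic-mean definition applied to the stated precision and recall (values taken from the abstract; the underlying per-class counts are not reported here):

\[
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} = \frac{2 \times 0.978 \times 0.964}{0.978 + 0.964} \approx 0.971
\]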