Accurate detection and classification of traffic signs play a vital role in ensuring driver safety and supporting advancements in autonomous driving technology. This paper introduces a novel approach for traffic sign detection and recognition that integrates the Faster R-CNN and YOLOX-Tiny models using a stacking ensemble technique. The ensemble merges the complementary strengths of both models, overcoming the limitations of either algorithm alone and achieving superior performance in challenging real-world scenarios. The proposed model was evaluated on the CCTSDB and MTSD datasets, demonstrating competitive performance compared to traditional algorithms. All experiments were conducted using Python 3.8 on the same system, equipped with an NVIDIA RTX 3060 (12 GB) graphics card. Our results show improved accuracy and efficiency in recognizing traffic signs across a range of real-world scenarios, including distant, close, complex, moderate, and simple settings: the model achieves a 4.78% increase in mean Average Precision (mAP) over Faster R-CNN, and improves Frames Per Second (FPS) by 8.1% and mAP by 6.18% over YOLOX-Tiny. Moreover, the proposed model exhibited notable precision in difficult conditions such as ultra-long-distance detections, shadow occlusions, motion blur, and complex environments with diverse sign categories. These findings demonstrate the model's robustness and support the continued development of autonomous driving technology and sustainable future transportation. The results presented in this paper could be integrated into advanced driver-assistance systems and autonomous vehicles, offering a significant step forward in enhancing road safety and traffic management.
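The abstract does not specify how the stacking ensemble combines the two detectors' outputs, so the following is only a minimal illustrative sketch of one common way to fuse two object detectors' predictions: pool the boxes and confidence scores from both models, then suppress duplicates with non-maximum suppression (NMS). The function names (`nms`, `ensemble_detections`) and the pooling-plus-NMS strategy are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    order = scores.argsort()[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        # Drop boxes overlapping the kept box above the IoU threshold
        order = order[1:][iou <= iou_thresh]
    return keep

def ensemble_detections(dets_a, dets_b, iou_thresh=0.5):
    """Fuse detections from two detectors (e.g. a two-stage and a
    one-stage model) by pooling boxes/scores and applying NMS."""
    boxes = np.vstack([dets_a["boxes"], dets_b["boxes"]])
    scores = np.concatenate([dets_a["scores"], dets_b["scores"]])
    keep = nms(boxes, scores, iou_thresh)
    return boxes[keep], scores[keep]
```

A stacking ensemble as described in the paper would typically go further, learning how to weight each model's predictions rather than pooling them naively; this sketch only shows the duplicate-removal step that any such fusion needs.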