Abstract

Improving traffic safety is one of the most important goals of intelligent transportation systems. Traffic signs play a vital role in safe driving and in avoiding accidents by informing the driver about speed limits or possible dangers such as icy roads, imminent road works, or pedestrian crossings. In-vehicle contextual augmented reality (AR) has the potential to provide novel visual feedback to drivers for an enhanced driving experience. In this paper, we propose a new AR traffic sign recognition system (AR-TSR) to improve driving safety and enhance the driver's experience, based on a Haar cascade and the bag-of-visual-words approach with spatial information to improve accuracy, and we give an overview of studies related to driver perception and the effectiveness of AR in improving driving safety. In the first step, the region of interest (ROI) is extracted using a scanning window with a Haar cascade detector and an AdaBoost classifier to reduce the search region in the hypothesis-generation step. Second, we propose a new computationally efficient method to model the global spatial distribution of visual words by taking the spatial relationships among them into consideration. Finally, a multiclass sign classifier takes the positive ROIs and assigns a 3D traffic sign to each one using a linear SVM. Experimental results show that the proposed method reaches performance comparable to state-of-the-art approaches with lower computational complexity and a shorter training time, and that the AR-TSR more strongly impacts the allocation of visual attention during the decision-making phase.
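
The pipeline summarized above (Haar-cascade hypothesis generation, spatial bag-of-visual-words encoding, linear-SVM classification) can be sketched roughly as follows using OpenCV and scikit-learn. This is a minimal illustration under assumptions of our own, not the authors' implementation: the cascade file name, the use of SIFT features, the 2x2 spatial grid, and the vocabulary size are placeholders.

```python
# Rough sketch of the AR-TSR recognition stages described in the abstract.
# Assumptions: a pre-trained Haar cascade file, SIFT descriptors, a 2x2 grid,
# and a 200-word vocabulary; none of these are the paper's actual parameters.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

# --- Step 1: hypothesis generation with an AdaBoost-trained Haar cascade ---
# "traffic_sign_cascade.xml" is a placeholder for a cascade trained on sign ROIs.
cascade = cv2.CascadeClassifier("traffic_sign_cascade.xml")

def detect_rois(frame_gray):
    """Scan the frame and return candidate sign regions as (x, y, w, h)."""
    return cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)

# --- Step 2: bag-of-visual-words with coarse spatial information ---
sift = cv2.SIFT_create()

def bovw_spatial_histogram(roi_gray, kmeans, grid=(2, 2)):
    """Encode an ROI as concatenated visual-word histograms over a spatial grid,
    one simple way to retain the global spatial distribution of visual words."""
    h, w = roi_gray.shape
    k = kmeans.n_clusters
    hist = np.zeros((grid[0] * grid[1], k), dtype=np.float32)
    keypoints, descriptors = sift.detectAndCompute(roi_gray, None)
    if descriptors is None:
        return hist.ravel()
    words = kmeans.predict(descriptors.astype(np.float32))
    for kp, word in zip(keypoints, words):
        gx = min(int(kp.pt[0] * grid[1] / w), grid[1] - 1)
        gy = min(int(kp.pt[1] * grid[0] / h), grid[0] - 1)
        hist[gy * grid[1] + gx, word] += 1
    hist /= max(hist.sum(), 1.0)  # L1-normalize the whole descriptor
    return hist.ravel()

# --- Step 3: multiclass sign classification with a linear SVM ---
def train_classifier(train_rois, train_labels, n_words=200):
    """Build the visual vocabulary and fit a linear (one-vs-rest) SVM."""
    all_desc = []
    for roi in train_rois:
        _, d = sift.detectAndCompute(roi, None)
        if d is not None:
            all_desc.append(d)
    kmeans = KMeans(n_clusters=n_words, n_init=10).fit(
        np.vstack(all_desc).astype(np.float32))
    features = [bovw_spatial_histogram(roi, kmeans) for roi in train_rois]
    svm = LinearSVC().fit(features, train_labels)
    return kmeans, svm
```

At run time, each positive ROI returned by detect_rois would be encoded with bovw_spatial_histogram and passed to svm.predict, and the predicted class would select the 3D traffic sign to render in the AR view.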
