Abstract

Traffic signs play a vital role in safe driving and in avoiding accidents by informing the driver about speed limits or possible dangers such as icy roads, imminent road works, or pedestrian crossings. Considering processing time and classification accuracy together, we present a novel approach to visual word construction that takes the spatial information of keypoints into account, using distance and angle information to enhance the quality of the visual words generated from extracted keypoints in the Bag of Visual Words (BoVW) representation. In this paper, we propose a new, computationally efficient method that models the global spatial distribution of visual words by taking the spatial relationships among them into consideration. In the first step, the region of interest is extracted using a scanning window with a Haar cascade detector and an AdaBoost classifier, which reduces the search region in the hypothesis generation step. In the second step, the extracted regions are represented with BoVW features augmented by spatial information for classification. Experimental results show that the proposed method reaches performance comparable to state-of-the-art approaches with lower computational complexity and shorter training time. The results demonstrate that the additional relative spatial information provided by our approach is complementary: it improves accuracy while maintaining short retrieval time, and achieves better traffic sign recognition accuracy than methods based on the traditional BoVW model.
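The abstract describes augmenting the standard BoVW histogram with pairwise distance and angle information between keypoints. The sketch below illustrates the general idea under stated assumptions: it presumes visual-word assignments have already been computed for each keypoint (e.g., by quantizing descriptors against a learned vocabulary), and the particular binning scheme for normalized distances and line orientations is a hypothetical choice, not the paper's exact formulation.

```python
import math

def spatial_bovw(keypoints, vocab_size, n_dist_bins=4, n_angle_bins=8):
    """Build a BoVW descriptor augmented with pairwise spatial information.

    keypoints: list of (x, y, word_id) tuples, where word_id is the
    visual-word index already assigned to that keypoint.
    Returns the concatenation of the visual-word histogram and a
    joint distance/angle co-occurrence histogram (illustrative binning).
    """
    # Standard BoVW term-frequency histogram over the visual vocabulary.
    hist = [0.0] * vocab_size
    for _, _, w in keypoints:
        hist[w] += 1.0

    # Pairwise spatial histogram: for every keypoint pair, quantize the
    # distance (normalized by the largest pairwise distance) and the
    # orientation of the connecting line into a joint bin.
    spatial = [0.0] * (n_dist_bins * n_angle_bins)
    pairs = [(a, b) for i, a in enumerate(keypoints) for b in keypoints[i + 1:]]
    if pairs:
        max_d = max(math.hypot(b[0] - a[0], b[1] - a[1]) for a, b in pairs) or 1.0
        for a, b in pairs:
            d = math.hypot(b[0] - a[0], b[1] - a[1]) / max_d
            ang = math.atan2(b[1] - a[1], b[0] - a[0]) % math.pi  # orientation in [0, pi)
            di = min(int(d * n_dist_bins), n_dist_bins - 1)
            ai = min(int(ang / math.pi * n_angle_bins), n_angle_bins - 1)
            spatial[di * n_angle_bins + ai] += 1.0

    # L1-normalize each part separately so both cues contribute
    # regardless of the number of keypoints or pairs.
    def l1(v):
        s = sum(v) or 1.0
        return [x / s for x in v]

    return l1(hist) + l1(spatial)
```

In a full pipeline, this descriptor would be computed over each region returned by the detection stage and fed to the classifier; because the spatial histogram uses normalized distances and orientations, it is invariant to translation and scale of the sign within the detected window.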
