Abstract

In recent decades, advanced driver assistance systems (ADAS) have made great advances. Many computer vision based techniques have been proposed for traffic scene understanding using on-board cameras. One important task is the detection and recognition of traffic signs to provide road condition information to drivers. In this paper, a two-stage approach is proposed for traffic sign detection and classification on real scene images. In the first stage, we adopt Faster R-CNN to detect the locations of traffic signs. The parameter setting is designed to achieve a very low miss rate at the cost of increased false positives. The detections are then passed to classification networks based on ResNet, VGG, and SVM for traffic sign validation. The public dataset TT100K and images collected from Taiwan road scenes are used for network training and testing. Our proposed technique is evaluated on videos acquired from highway, suburban, and urban scenarios. The experimental results obtained using Faster R-CNN for detection combined with VGG for classification demonstrate superior performance compared to YOLOv3 and Mask R-CNN.

Keywords: Traffic sign detection; Traffic sign classification; Advanced driver assistance systems (ADAS); Two-stage network
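The two-stage idea described above can be sketched as follows. This is an illustrative stand-in, not the authors' code: `detect_candidates` plays the role of the Faster R-CNN detector tuned with a low score threshold (few misses, many false positives), and a second classifier (VGG/ResNet/SVM in the paper, here a toy function) validates each candidate and rejects false positives. All function names, thresholds, and data below are hypothetical.

```python
# Hedged sketch of a two-stage detect-then-validate pipeline.
# Stage 1 uses a deliberately LOW threshold so almost no true sign is missed;
# Stage 2 applies a STRICT classifier to discard the resulting false positives.

DETECTION_THRESHOLD = 0.05      # low: prioritizes recall over precision
CLASSIFICATION_THRESHOLD = 0.9  # strict: filters stage-1 false positives

def detect_candidates(detections, threshold=DETECTION_THRESHOLD):
    """Stage 1: keep every box whose detector score clears the low threshold."""
    return [d for d in detections if d["score"] >= threshold]

def classify_candidates(candidates, classify, threshold=CLASSIFICATION_THRESHOLD):
    """Stage 2: a classifier validates each candidate box."""
    validated = []
    for cand in candidates:
        label, confidence = classify(cand)
        if label != "background" and confidence >= threshold:
            validated.append({**cand, "label": label, "confidence": confidence})
    return validated

# Toy classifier standing in for the trained VGG network.
def toy_classifier(candidate):
    if candidate["score"] > 0.5:
        return ("speed_limit_50", 0.97)
    return ("background", 0.99)

raw_detections = [
    {"box": (10, 10, 50, 50), "score": 0.92},    # a true traffic sign
    {"box": (200, 40, 230, 70), "score": 0.12},  # clutter kept by stage 1
]
candidates = detect_candidates(raw_detections)           # both boxes survive
validated = classify_candidates(candidates, toy_classifier)
print(len(candidates), len(validated))  # 2 1
```

The design point is the asymmetry of the two thresholds: stage 1 trades precision for recall, and stage 2 restores precision, which is exactly the role the paper assigns to the classification networks.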
