Abstract

The detection and recognition of traffic signs is an important topic in intelligent transportation systems. Automatically detecting and recognizing traffic signs during driving is a prerequisite for unmanned driving, so work in this area has considerable practical value and application prospects. Traditional methods typically detect and recognize traffic signs image by image, using only the information in the current image and ignoring the relationships across the image sequence. To address this issue, we propose a novel model that exploits the relationships among multiple images to detect and recognize traffic signs in a driving video sequence quickly and accurately. The proposed model is a fusion of the YOLO-V3 and VGG19 networks. We evaluate the model on a public dataset and compare it with a baseline method; the results show that it achieves an accuracy of over 90% and outperforms the baseline for all types of traffic signs under different conditions. We therefore conclude that the proposed model is both efficient and accurate.
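To make the two-stage idea concrete, the sketch below illustrates one plausible way such a detection-plus-recognition pipeline could be wired together in PyTorch. It is not the paper's implementation: the `detect_sign_boxes` function is a hypothetical placeholder standing in for YOLO-V3 inference, and the class count, crop size, and per-frame loop are illustrative assumptions. Only the VGG19 classification stage uses a real torchvision API.

```python
# A minimal sketch (assumptions noted in comments), not the authors' code.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_SIGN_CLASSES = 43   # assumption: a GTSRB-style traffic-sign label set
CROP_SIZE = 224         # VGG19's standard input resolution

def build_sign_classifier(num_classes=NUM_SIGN_CLASSES):
    """VGG19 backbone with its final layer replaced for traffic-sign categories."""
    net = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
    net.classifier[6] = nn.Linear(net.classifier[6].in_features, num_classes)
    return net.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((CROP_SIZE, CROP_SIZE)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def detect_sign_boxes(frame):
    """Hypothetical stub for the YOLO-V3 detection stage: should return a list
    of (x1, y1, x2, y2) candidate traffic-sign boxes for the given frame."""
    return []  # replace with a real YOLO-V3 inference call

@torch.no_grad()
def recognize_signs_in_video(frames, classifier):
    """Detect and classify signs frame by frame; keeping per-frame predictions
    is what allows consistency checks across the video sequence."""
    results = []
    for frame in frames:  # frame: HxWx3 uint8 array
        frame_preds = []
        for (x1, y1, x2, y2) in detect_sign_boxes(frame):
            crop = preprocess(frame[y1:y2, x1:x2]).unsqueeze(0)
            logits = classifier(crop)
            frame_preds.append(int(logits.argmax(dim=1)))
        results.append(frame_preds)
    return results
```

In this layout, cross-frame information (the sequence of per-frame predictions) is available for smoothing or voting, which is the kind of relationship across images that the abstract argues single-image methods discard.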
