Abstract

Road sign recognition plays an important role in autonomous driving. At present, CNN-based architectures are mainly used to classify detected road signs. However, because of complex road conditions and weather, the detected images often contain cluttered background pixels, which reduces classification accuracy. The release of the Segment Anything Model provides a powerful tool for removing background pixels quickly and conveniently. We preprocess the data so that only the road sign region of each image is retained, and train the model directly on these background-free images to perform the recognition task. Through a comparative experiment, this paper constructs two CNN classification models with identical structures: one is trained on the segmented dataset and the other on the original dataset. The performance of the two models is evaluated and compared to verify the improvement brought by the segmented dataset. We find that the Segment Anything Model can accurately segment most of the images in the dataset, that the model trained on the segmented dataset outperforms the model trained on the original dataset, and that the optimized model reaches high accuracy in fewer training epochs.
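The abstract does not specify the exact preprocessing pipeline; the following is a minimal sketch of how background pixels could be removed with the publicly available segment-anything library (SamPredictor API). The checkpoint path, input file name, and bounding box values are hypothetical placeholders; in practice the box would come from the upstream sign detector.

# Minimal sketch: mask out background pixels with the Segment Anything Model (SAM).
# Assumes the `segment-anything` package and a downloaded ViT-H checkpoint.
import numpy as np
import cv2
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")  # hypothetical local path
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("sign.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical input image
predictor.set_image(image)

# Prompt SAM with the detected sign's bounding box (x0, y0, x1, y1).
box = np.array([40, 60, 200, 220])  # hypothetical detector output
masks, scores, _ = predictor.predict(box=box, multimask_output=False)

# Zero out everything outside the predicted mask so the classifier
# is trained only on the road sign pixels.
sign_only = (image * masks[0][..., None]).astype(np.uint8)
cv2.imwrite("sign_masked.png", cv2.cvtColor(sign_only, cv2.COLOR_RGB2BGR))

The masked images produced this way would form the "segmented dataset", while the untouched inputs form the "original dataset" used by the second, identically structured CNN.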
