Abstract

An essential component in the race towards the self-driving car is automatic traffic sign recognition. The ability to automatically recognize road signs allows self-driving cars to make prompt decisions, such as adhering to speed limits and stopping at traffic junctions. Traditionally, feature-based computer vision techniques were employed to recognize traffic signs; however, recent deep learning techniques have been shown to outperform these traditional colour- and shape-based detection methods. The deep convolutional neural network (DCNN) is the class of deep learning method most commonly applied to vision-related tasks such as traffic sign recognition. For a DCNN to work well, it is imperative that the algorithm be given a vast amount of training data. However, due to the scarcity of a curated dataset of Malaysian traffic signs, training a DCNN to perform well can be very challenging. In this paper, we demonstrate that a DCNN can be trained to excellent accuracy with little training data by using transfer learning. We retrain various DCNNs pre-trained on other image recognition tasks, fine-tuning only the top layers on our dataset. Experimental results confirm that, using as few as 100 image samples for 5 different classes, we are able to classify hitherto unseen traffic signs with above 90% accuracy for most pre-trained models, and 98.33% for the DenseNet169 pre-trained model.
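The fine-tuning setup described above can be sketched in Keras. This is a minimal illustration, not the authors' actual training code: the input size, the 256-unit head, and the optimizer are assumptions; only the DenseNet169 backbone, the frozen pre-trained base, and the 5-class output follow the abstract. The pre-trained weights are loaded with `weights=None` here so the sketch runs offline; in practice `weights="imagenet"` would supply the transferred features.

```python
# Transfer-learning sketch: freeze a pre-trained DenseNet169 base and
# train only a new classification head on a small traffic-sign dataset.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet169

NUM_CLASSES = 5  # five traffic-sign classes, as in the abstract

# Convolutional base without its original classifier; use
# weights="imagenet" in practice (weights=None keeps this sketch offline).
base = DenseNet169(include_top=False, weights=None,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze all pre-trained convolutional layers

# New "top layers", the only part trained on the small dataset.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),   # assumed head size
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With the base frozen, a call such as `model.fit(train_images, train_labels, epochs=10)` updates only the two dense layers, which is what makes training feasible on roughly 100 samples.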
