Abstract

Intelligent Transportation Systems (ITS), including unmanned vehicles, have gradually matured and are being deployed on the road. A key technical problem is how to eliminate interference from various environmental factors and carry out accurate, efficient traffic sign detection and recognition. Traditional visual object recognition relies mainly on handcrafted visual features, e.g., color and edges, which have limitations. Convolutional neural networks (CNNs), designed for visual object recognition based on deep learning, have successfully overcome these shortcomings of conventional object recognition. In this paper, we evaluate the performance of the latest version of YOLOv5 on our own dataset for Traffic Sign Recognition (TSR), and show how well this deep learning model for visual object recognition suits TSR through a comprehensive comparison with SSD (single shot multibox detector). In our experiments, YOLOv5 achieves 97.70% mAP@0.5 over all classes, while SSD obtains 90.14% on the same measure. YOLOv5 also outperforms SSD in recognition speed.
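For readers who wish to set up comparable inference, the sketch below shows one way to run a trained YOLOv5 detector on a road-scene image through the official PyTorch Hub interface. The weight file tsr_best.pt and the image path are hypothetical placeholders, not artifacts released with this paper.

# Minimal YOLOv5 inference sketch (assumes PyTorch, pandas, and internet access for torch.hub).
# The custom weight file and image path below are hypothetical placeholders.
import torch

# Load a YOLOv5 model fine-tuned on a traffic sign dataset (the path is an assumption).
model = torch.hub.load('ultralytics/yolov5', 'custom', path='tsr_best.pt')
model.conf = 0.25  # confidence threshold for reported detections

# Run detection on a single road-scene image.
results = model('road_scene.jpg')
results.print()  # summary: detections per class and inference time

# Each row: xmin, ymin, xmax, ymax, confidence, class index, class name
detections = results.pandas().xyxy[0]
print(detections[['xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'name']])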

Highlights

  • In recent years, with the rapid rise of Artificial Intelligence (AI), vehicle-aided driving systems have transformed the traditional driving mode

  • This project probes the accuracy and speed of Traffic Sign Recognition (TSR) on our own traffic sign dataset

  • In this paper, we select the latest version of the You Only Look Once (YOLO) family, namely YOLOv5, and evaluate its performance


Introduction

With the rapid rise of Artificial Intelligence (AI), vehicle-aided driving systems have transformed the traditional driving mode. By acquiring real-time road condition information, such a system promptly reminds drivers to take the correct actions, thereby preventing car accidents caused by driver fatigue. Beyond auxiliary driving systems, the development of autonomous vehicles requires rapid and accurate detection of traffic signs from digital images. Traffic Sign Recognition (TSR) is to detect the location of traffic signs in digital images or video frames and assign each sign a specific class [25]. TSR methods basically make use of visual information such as the shape and color of traffic signs. Conventional TSR algorithms face drawbacks in real-time tests, as they are restricted by driving conditions, including lighting, camera angle, occlusion, driving speed, and so on. It is difficult for them to achieve multi-target detection, and visual objects are easily missed because of slow recognition [6].
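Since accuracy is reported as mAP@0.5, the following sketch illustrates the matching rule behind that metric: a predicted traffic sign box counts as a true positive only if its Intersection over Union (IoU) with a same-class ground truth box is at least 0.5. The box coordinates are illustrative values, not data from this paper.

# Sketch of the IoU criterion underlying mAP@0.5 (boxes are illustrative, not paper data).
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

predicted = (48, 30, 122, 104)    # hypothetical detector output for a sign
ground_truth = (50, 32, 120, 102) # hypothetical annotated box
# At the mAP@0.5 threshold, this prediction would be counted as a true positive.
print(iou(predicted, ground_truth) >= 0.5)  # True (IoU is about 0.89)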
