Abstract

Automated driving and vehicle safety systems depend on object detection, which must be accurate overall, robust to weather and environmental conditions, and fast enough to run in real time. Meeting these requirements calls for image processing algorithms that inspect the contents of images. This article compares the accuracy of five major image processing algorithms: Region-based Fully Convolutional Network (R-FCN), Mask Region-based Convolutional Neural Network (Mask R-CNN), Single Shot Multi-Box Detector (SSD), RetinaNet, and You Only Look Once v4 (YOLOv4). The comparative analysis uses the large-scale Berkeley Deep Drive (BDD100K) dataset. The strengths and limitations of each algorithm are analyzed in terms of accuracy (with and without occlusion and truncation), computation time, and the precision-recall curve. The comparison presented in this article is helpful for understanding the pros and cons of standard deep-learning-based algorithms under real-time deployment constraints. We conclude that, in an identical testing environment, YOLOv4 most accurately detects difficult road target objects under complex road scenarios and weather conditions.
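The precision and recall figures mentioned above are conventionally computed by matching predicted boxes to ground-truth boxes via intersection-over-union (IoU). The sketch below illustrates this standard procedure with greedy one-to-one matching at a 0.5 IoU threshold; the function names and data layout are illustrative and not taken from the article.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(detections, ground_truths, iou_threshold=0.5):
    """Greedily match detections (highest confidence first) to unmatched
    ground-truth boxes; a match at or above the IoU threshold counts as a
    true positive. Returns (precision, recall)."""
    detections = sorted(detections, key=lambda d: d["score"], reverse=True)
    matched = set()
    tp = 0
    for det in detections:
        best, best_iou = None, iou_threshold
        for i, gt in enumerate(ground_truths):
            if i in matched:
                continue
            overlap = iou(det["box"], gt)
            if overlap >= best_iou:
                best, best_iou = i, overlap
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(detections) - tp      # unmatched detections
    fn = len(ground_truths) - tp   # missed ground-truth objects
    precision = tp / (tp + fp) if detections else 0.0
    recall = tp / (tp + fn) if ground_truths else 0.0
    return precision, recall
```

Sweeping the confidence threshold over all detections and recording (precision, recall) at each step yields the precision-recall curve used in the article's comparison.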

Highlights

  • The automobile industry has developed rapidly since the first demonstration in the 1980s [1], and vehicle navigation and intelligence systems have improved

  • In this comparative study, the test set is divided into two category levels, i.e., objects without occlusion and truncation and objects with occlusion and truncation

  • The one-stage detection model You Only Look Once v4 (YOLOv4) achieves the highest detection precision and recall for target detection at all levels


Introduction

The automobile industry has developed rapidly since the first demonstration in the 1980s [1], and vehicle navigation and intelligence systems have improved accordingly. The growing number of road vehicles raises concerns about traffic congestion, road safety, pollution, etc. Autonomous driving is a challenging task; a small error in the system can lead to fatal accidents. Visual data play an essential role in enabling advanced driver-assistance systems in autonomous vehicles [2]. The low cost and wide availability of vision-based sensors offer great potential for detecting road incidents. Emerging autonomous vehicles use various sensors and deep learning methods to detect and classify four classes of objects (vehicle, pedestrian, traffic sign, and traffic light) to improve safety by monitoring the current road environment.

