Abstract

Vehicle detection is one of the most important environment perception tasks for autonomous vehicles. Traditional vision-based vehicle detection methods are not accurate enough, especially for small and occluded targets, while light detection and ranging (lidar)-based methods detect obstacles well but are time-consuming and have a low classification rate for different target types. To address these shortcomings and make full use of the depth information of lidar and the obstacle classification ability of vision, this work proposes a real-time vehicle detection algorithm that fuses vision and lidar point cloud information. First, obstacles are detected by the grid projection method using the lidar point cloud. Then, the obstacles are mapped to the image to obtain several separate regions of interest (ROIs). Next, the ROIs are expanded based on a dynamic threshold and merged to generate the final ROI. Finally, a deep learning method named You Only Look Once (YOLO) is applied to the ROI to detect vehicles. Experimental results on the KITTI dataset demonstrate that the proposed algorithm achieves high detection accuracy and good real-time performance. Compared with detection based only on the YOLO deep learning method, the mean average precision (mAP) is increased by 17%.
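The first stage of the pipeline, grid-projection obstacle detection, can be sketched as follows. This is an illustrative implementation, not the paper's exact method: the cell size and height-spread threshold are assumed parameters, and the paper may use additional cues beyond the max-min height spread used here.

```python
import numpy as np

def detect_obstacle_cells(points, cell=0.2, h_thresh=0.3):
    """Grid-projection sketch: project lidar points (N, 3) onto an x-y
    grid and flag cells whose height spread (max z - min z) exceeds
    h_thresh, marking them as obstacle cells."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)                       # shift indices to start at 0
    shape = tuple(ij.max(axis=0) + 1)
    zmax = np.full(shape, -np.inf)
    zmin = np.full(shape, np.inf)
    np.maximum.at(zmax, (ij[:, 0], ij[:, 1]), points[:, 2])
    np.minimum.at(zmin, (ij[:, 0], ij[:, 1]), points[:, 2])
    return (zmax - zmin) > h_thresh            # boolean occupancy grid

# toy cloud: flat ground plus one tall vertical cluster
rng = np.random.default_rng(0)
ground = np.column_stack([rng.uniform(0, 2, 200),
                          rng.uniform(0, 2, 200),
                          np.zeros(200)])
pole = np.array([[1.0, 1.0, 0.0], [1.0, 1.0, 1.5]])
grid = detect_obstacle_cells(np.vstack([ground, pole]))
```

Only the cell containing the tall cluster is flagged; flat-ground cells have zero height spread and stay empty, which is the property the grid projection exploits to separate obstacles from the road surface.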

Highlights

  • The core technologies of unmanned driving include environmental perception, precise positioning, and path planning

  • The precision and recall chart of the You Only Look Once (YOLO) v3 algorithm and the proposed algorithm for vehicle detection on the KITTI test set are presented in Figures 11(a) and 11(b), respectively

  • We tested it on our experimental platform with 1000 frames of data randomly selected from the KITTI dataset; the average processing time per frame of the proposed algorithm was about 0.09 s, while that of the YOLO v3 algorithm was 0.05 s


Summary

Introduction

The core technologies of unmanned driving include environmental perception, precise positioning, and path planning. In [14], the obstacles in the image and point cloud were processed separately to identify and classify pedestrians. In [15], targets were extracted from the camera image, vehicle hypotheses were generated by Haar-like features and AdaBoost, and the hypotheses were then confirmed by lidar. In [16], the authors used lidar to determine the region of interest and an SVM to classify the target in the image, but the proposed method could detect only the rear features of a vehicle, and the detection rate was low. The main idea of this paper is the same as that of [16, 17]: use lidar to extract the region of interest, and use YOLO, which has high detection accuracy, to detect the objects in the ROI of the images.
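The lidar-to-image ROI extraction that this paper shares with [16, 17] can be sketched as below. This is a minimal sketch under stated assumptions: the 3×4 projection matrix `P` plays the role of a rectified camera matrix such as KITTI's P2, the expansion factor `k` is a hypothetical stand-in for the paper's dynamic threshold, and the merge is a simplified single pass (chained overlaps may need repeated passes).

```python
import numpy as np

def project_box(corners_3d, P):
    """Project 3D obstacle corners (N, 3) into the image with a 3x4
    camera projection matrix P and return the enclosing ROI
    (x1, y1, x2, y2) in pixels."""
    homo = np.hstack([corners_3d, np.ones((len(corners_3d), 1))])
    uvw = homo @ P.T
    uv = uvw[:, :2] / uvw[:, 2:3]              # perspective divide
    return np.array([uv[:, 0].min(), uv[:, 1].min(),
                     uv[:, 0].max(), uv[:, 1].max()])

def expand_roi(roi, img_w, img_h, k=0.1):
    """Grow the ROI by a margin proportional to its size (hypothetical
    factor k; the paper's threshold is dynamic), clipped to the image."""
    x1, y1, x2, y2 = roi
    dx, dy = k * (x2 - x1), k * (y2 - y1)
    return np.array([max(0, x1 - dx), max(0, y1 - dy),
                     min(img_w, x2 + dx), min(img_h, y2 + dy)])

def merge_rois(rois):
    """Merge overlapping ROIs into one enclosing ROI per overlapping
    pair (single pass; a simplified stand-in for the paper's merge)."""
    merged = []
    for r in rois:
        r = np.asarray(r, float)
        for i, m in enumerate(merged):
            if r[0] <= m[2] and m[0] <= r[2] and r[1] <= m[3] and m[1] <= r[3]:
                merged[i] = np.array([min(r[0], m[0]), min(r[1], m[1]),
                                      max(r[2], m[2]), max(r[3], m[3])])
                break
        else:
            merged.append(r)
    return merged
```

The merged region is then cropped from the image and handed to the YOLO detector, so the network only searches areas where the lidar already reports an obstacle.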

Region of Interest Extraction
Obstacle Classification by YOLO
Results
Discussion
Summary and Outlook

