Abstract

In autonomous driving, using a variety of sensors to recognize preceding vehicles at middle and long distances helps improve driving performance and enables the development of various functions. However, if only LiDAR or only cameras are used in the recognition stage, the limitations of each sensor make it difficult to obtain the necessary data. In this paper, we propose a method for converting vision-tracked data into bird’s-eye-view (BEV) coordinates, using the equation that projects LiDAR points onto an image, together with a method for fusing LiDAR and vision-tracked data. The results of detecting the closest in-path vehicle (CIPV) in various situations show that the proposed method is effective. In addition, when the fusion result was evaluated with the Euro NCAP autonomous emergency braking (AEB) test protocol, the improved perception performance yielded better AEB performance than using LiDAR alone. The performance of the proposed method was verified through real-vehicle tests in various scenarios. Consequently, the proposed sensor fusion method significantly improved the adaptive cruise control (ACC) function in autonomous maneuvering. We expect this improvement in perception performance to contribute to the overall stability of ACC.
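The fusion between LiDAR and vision-tracked data described above can be illustrated with a small sketch. According to the section list, the paper associates LiDAR and vision tracks via an intersection-over-union (IoU) step; the version below is a minimal, hypothetical illustration assuming axis-aligned BEV bounding boxes of the form `(x_min, y_min, x_max, y_max)` — the function names and the threshold value are illustrative choices, not taken from the paper.

```python
def iou_bev(a, b):
    """Axis-aligned IoU of two BEV boxes given as (x_min, y_min, x_max, y_max)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))   # overlap along x
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))   # overlap along y
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_tracks(lidar_boxes, vision_boxes, iou_thresh=0.3):
    """Greedily pair LiDAR and vision tracks whose BEV IoU exceeds a threshold."""
    pairs, used = [], set()
    for i, lb in enumerate(lidar_boxes):
        best_j, best_iou = -1, iou_thresh
        for j, vb in enumerate(vision_boxes):
            if j in used:
                continue
            iou = iou_bev(lb, vb)
            if iou > best_iou:
                best_j, best_iou = j, iou
        if best_j >= 0:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

For example, two boxes overlapping by 6 m² out of a 10 m² union give an IoU of 0.6 and would be paired; a matched pair can then be reported as a single fused track, e.g. the CIPV candidate passed to ACC.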

Highlights

  • In autonomous vehicles, collision assistance and avoidance systems for preceding vehicles are essential, and many researchers have conducted extensive research on these topics

  • We propose a new method to increase the accuracy of closest in-path vehicle (CIPV) detection at middle distances by fusing the object tracking results of a low-channel LiDAR with those of a vision sensor

  • We propose a sensor fusion method that utilizes LiDAR and vision-tracked data and improves CIPV detection by transforming image pixel coordinates into the bird’s-eye view (BEV)
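The pixel-to-BEV transform named in the last highlight can be sketched under a common flat-ground assumption: a pixel is back-projected through the camera model and intersected with the ground plane to obtain vehicle-frame (BEV) coordinates. The calibration values below (intrinsics `K`, camera-to-vehicle rotation `R`, camera height in `t`) are hypothetical placeholders, not the paper's actual calibration.

```python
import numpy as np

# Hypothetical calibration, for illustration only.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])   # camera intrinsics
R = np.array([[ 0.0,  0.0, 1.0],          # camera-to-vehicle rotation:
              [-1.0,  0.0, 0.0],          # camera looks along vehicle +x,
              [ 0.0, -1.0, 0.0]])         # image "down" maps to vehicle -z
t = np.array([0.0, 0.0, 1.5])             # camera origin in vehicle frame (1.5 m high)

def pixel_to_bev(u, v, K, R, t, ground_z=0.0):
    """Back-project an image pixel to vehicle (BEV) coordinates,
    assuming the point lies on the flat ground plane z = ground_z."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray in camera frame
    ray_veh = R @ ray_cam                               # same ray in vehicle frame
    s = (ground_z - t[2]) / ray_veh[2]                  # scale at which the ray hits the ground
    p = t + s * ray_veh
    return p[0], p[1]                                   # BEV (x forward, y left)
```

With these placeholder values, a pixel 200 rows below the principal point maps to a ground point 7.5 m directly ahead of the vehicle. The flat-ground assumption is the usual caveat: the mapping degrades on slopes and for points not on the road surface.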

Introduction

Collision assistance and avoidance systems for preceding vehicles are essential in autonomous vehicles, and many researchers have conducted extensive research on these topics. We propose a new sensor fusion method that utilizes the object tracking results obtained from LiDAR and camera sensors, and we verified its performance by applying the fused data to adaptive cruise control (ACC). To fuse the LiDAR and camera tracking data, a checkerboard was used to obtain the extrinsic and intrinsic parameters of the camera, which were then used to project the data; this calibration is a cumbersome task that depends on the size and placement of the checkerboard and on the sensor positions. The fusion result indicates whether a recognized object is the CIPV of the ego vehicle, so that it can be applied to ACC.
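The projection step described above — using the checkerboard-calibrated extrinsic and intrinsic parameters to project LiDAR points onto the image — can be sketched as follows. This is the standard pinhole projection, not the paper's exact implementation; the calibration matrices are hypothetical placeholders for values a checkerboard procedure would produce.

```python
import numpy as np

# Hypothetical calibration results, for illustration only.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])    # camera intrinsics
R_cl = np.array([[0.0, -1.0,  0.0],        # LiDAR-to-camera rotation
                 [0.0,  0.0, -1.0],        # (LiDAR x forward, z up ->
                 [1.0,  0.0,  0.0]])       #  camera x right, y down, z forward)
t_cl = np.array([0.0, 0.0, 0.0])           # LiDAR-to-camera translation (m)

def project_lidar_to_image(points, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates,
    discarding points behind the camera."""
    cam = points @ R.T + t                 # transform into the camera frame
    cam = cam[cam[:, 2] > 0]               # keep points in front of the camera
    uvw = cam @ K.T                        # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide -> (u, v)
```

For instance, with these placeholder matrices a LiDAR point 10 m straight ahead projects to the principal point (640, 360). Once LiDAR points (or box corners) are expressed in image coordinates, they can be compared against the vision tracker's output for fusion.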

Sensors’ Description
Proving Ground for the Experiments
Point Cloud Segmentation and Tracking
Distance Accuracy from Tracked Data
Object Detection
Distance Estimation with Regression
Object Tracking
Object 3D Coordinate Estimation
Transforming the Image Pixel to the BEV
Fusion of LiDAR and Vision
Fusion of the Camera and LiDAR Tracking Data with the IoU
Result of Fusion Data
Implementation of ACC
AEB Test
Qualitative Evaluation
Scenario
The CIPV Result of the Scenario
The Result of the Scenario
Conclusions