Abstract

Many production vehicles are now equipped with both cameras and radar to provide various driver-assistance systems (DAS) with position information about surrounding objects. These sensors, however, cannot provide position information accurate enough to realize highly automated driving functions and other advanced driver-assistance systems (ADAS). Sensor fusion methods have been proposed to overcome these limitations, but they tend to yield only limited gains in detection accuracy and robustness. In this study, we propose a camera-radar sensor fusion framework for robust vehicle localization based on vehicle part (rear corner) detection and localization. The main idea of the proposed method is to reinforce the azimuth angle accuracy of the radar information by detecting and localizing the rear corner of the target vehicle in an image. This part-based fusion approach enables accurate vehicle localization as well as robust performance under occlusion. For efficient part detection, several candidate points are generated around the initial radar point. A widely adopted deep learning approach is then used to detect and localize the left and right corners of target vehicles. The corner detection network outputs a reliability score for each detection based on the localization uncertainty of the corner part's center point. Using these reliability scores together with a particle filter, the most probable rear corner positions are estimated. The estimated positions (in pixel coordinates) are converted into azimuth angles, and the surrounding vehicle is localized with respect to the ego vehicle in the lateral and longitudinal directions by combining the corner's azimuth angle with the radar's range measurement. Experimental results show that the proposed method provides significantly better localization performance in the lateral direction, with greatly reduced maximum errors (radar: 3.02 m; proposed method: 0.66 m) and root-mean-squared errors (radar: 0.57 m; proposed method: 0.18 m).
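
To make the final localization step concrete, the following is a minimal sketch of how a corner's pixel position could be fused with the radar range measurement, assuming a simple pinhole camera model and ignoring camera/radar mounting offsets. The function names and parameters (fx, cx, and so on) are illustrative placeholders, not the paper's implementation.

```python
import math

def corner_pixel_to_azimuth(u, fx, cx):
    """Convert a corner's horizontal pixel coordinate u to an azimuth
    angle (rad), assuming a pinhole camera with focal length fx (px)
    and principal point cx (px), aligned with the ego vehicle's axis."""
    return math.atan((u - cx) / fx)

def localize_corner(u, radar_range, fx, cx):
    """Combine the camera-derived azimuth with the radar range to place
    the target's rear corner in ego-vehicle coordinates (meters)."""
    azimuth = corner_pixel_to_azimuth(u, fx, cx)
    longitudinal = radar_range * math.cos(azimuth)  # forward distance
    lateral = radar_range * math.sin(azimuth)       # sideways offset
    return longitudinal, lateral

# Example: corner detected at pixel u = 980 with fx = 1200 px, cx = 640 px,
# and a 25 m radar range to the target:
x, y = localize_corner(980.0, 25.0, 1200.0, 640.0)
print(f"longitudinal: {x:.2f} m, lateral: {y:.2f} m")
```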

Highlights

  • A key enabler of recent driver-assistance system (DAS) technologies is drastically improved perception based on vision and radar sensors

  • To overcome the limited position accuracy of the individual sensors, we developed a relative position estimation method for surrounding vehicles using sensor fusion based on mono-vision and radar

  • To analyze position estimation performance with respect to the relative position of surrounding vehicles, the test cases are grouped by lane; the resulting lateral position accuracy per lane is shown in Fig. 20 and Table 4

Summary

Introduction

A key enabler of recent DAS technologies is drastically improved perception based on vision and radar sensors. Radar alone, however, is still limited in resolving a vehicle's corner part, which is needed to accurately localize the vehicle in bird's-eye-view coordinates. In this study, we present a sensor fusion method that reinforces the azimuth angle accuracy of the radar data by localizing the vehicle's rear corner part using a camera.
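
As a rough illustration of the corner-estimation step described in the abstract, the sketch below shows how reliability scores from a detection network could drive a particle-filter estimate of the corner's horizontal pixel position. This is a simplified one-dimensional sketch under stated assumptions: score_fn stands in for the corner detection network, and the particle count, spread, and iteration count are hypothetical values, not the paper's.

```python
import numpy as np

def estimate_corner(score_fn, u0, n_particles=200, n_iters=5, spread=20.0, seed=0):
    """Estimate a corner's horizontal pixel position with a simple
    particle filter. Candidates are scattered around u0, the pixel the
    initial radar point projects to; score_fn(u) returns a reliability
    score in [0, 1] (a stand-in for the corner detection network)."""
    rng = np.random.default_rng(seed)
    particles = u0 + rng.normal(0.0, spread, n_particles)  # candidates around radar point
    for _ in range(n_iters):
        weights = np.array([score_fn(u) for u in particles])
        weights /= weights.sum()                 # normalize reliability scores
        # Resample in proportion to reliability, then diffuse slightly.
        idx = rng.choice(n_particles, size=n_particles, p=weights)
        particles = particles[idx] + rng.normal(0.0, spread * 0.3, n_particles)
    return float(particles.mean())               # most probable corner position

# Example with a synthetic score function peaked at pixel u = 512:
corner_u = estimate_corner(lambda u: np.exp(-((u - 512.0) / 15.0) ** 2), u0=490.0)
print(f"estimated corner pixel: {corner_u:.1f}")
```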

