Abstract

The object detection and recognition algorithm based on the fusion of millimeter-wave radar and high-definition video data can effectively improve the safety of intelligent-driving vehicles. However, because millimeter-wave radar and video have different data modalities, how to fuse the two effectively is the key issue. The difficulty lies in the shortcomings of existing fusion methods, such as insufficient adaptability to image distortion in data alignment and coordinate transformation, as well as the mismatch between the information levels of the data to be fused. To solve the problem of fusing millimeter-wave radar and video data, this paper proposes a decision-level fusion method for millimeter-wave radar and high-definition video data based on angular alignment. Specifically, through joint calibration and approximate interpolation, the radar and camera data are projected into a polar coordinate system and angularly aligned in the horizontal direction. Objects are then detected from the video data by a deep neural network model and combined with those detected by the radar to make a joint decision. Finally, the object detection and recognition task based on the fusion of the two kinds of data is completed. Theoretical analysis and experimental results indicate that the accuracy of the algorithm based on the fusion of the two data sources is superior to that of a single detection and recognition algorithm based on millimeter-wave radar or video data alone.
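As a rough illustration of the horizontal angular alignment described above, the sketch below maps an image column to an azimuth angle under a simple undistorted pinhole-camera assumption, so that camera detections and radar targets can be compared in a common polar (angular) coordinate. The focal length and principal point values are hypothetical placeholders for calibration results; the paper's joint calibration and approximate interpolation steps are not reproduced here.

```python
import math

def pixel_column_to_azimuth(u, fx, cx):
    """Map an image column u to a horizontal (azimuth) angle in degrees,
    assuming an undistorted pinhole camera with focal length fx (pixels)
    and principal point column cx. Positive angles lie to the right of
    the optical axis. fx and cx are assumed calibration values."""
    return math.degrees(math.atan2(u - cx, fx))

# Example: a camera with fx = 1000 px and cx = 960 px.
# A detection centered at column 1460 lies at roughly +26.6 degrees,
# which can then be compared directly with a radar target's azimuth.
print(pixel_column_to_azimuth(1460, fx=1000.0, cx=960.0))
```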

Highlights

  • Automotive driving assistance can significantly improve driving safety and help avoid traffic accidents

  • In this paper, we propose an object detection and recognition method that fuses the information of MMW Radar and video data

  • Urban highways are used as the scenario source of the test data, and 455 scenes from different times and places are extracted from real video to evaluate the performance of object detection and recognition based on the multi-sensor fusion method


Summary

INTRODUCTION

Automotive driving assistance can significantly improve driving safety and help avoid traffic accidents. Most existing methods achieve data alignment between MMW Radar and camera images by projecting into real-world coordinates. To solve the problem that the heterogeneous data of MMW Radar and the camera are difficult to fuse, this paper presents a fusion algorithm based on MMW Radar and high-definition video for object detection in intelligent driving assistance. Through multi-angle joint calibration, spatially sparse alignment of the heterogeneous data of MMW Radar and the camera in the common dimension is realized, with image distortion ignored. A neighboring approximate interpolation method is proposed to achieve the spatial alignment of the heterogeneous data of MMW Radar and the camera in the common dimension. While multi-camera fusion improves the recall of the object detection task, the proposed decision-level fusion of MMW Radar and camera data removes false positives from the object set and improves the accuracy.
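The decision-level fusion step can be pictured as matching camera detections and radar targets by azimuth once both are expressed in the common angular dimension. The minimal sketch below assumes a simple nearest-azimuth matching rule with a fixed tolerance; the threshold, data layout, and confirmation criterion are illustrative assumptions, not the paper's exact decision logic.

```python
def fuse_detections(camera_objs, radar_objs, angle_tol_deg=2.0):
    """Decision-level fusion by angular matching (illustrative sketch).

    camera_objs: list of dicts with 'azimuth' (deg), 'label', 'score'
                 from the video detector.
    radar_objs:  list of dicts with 'azimuth' (deg) and 'range' (m)
                 from the MMW Radar.
    A camera detection is kept only if some radar target lies within
    angle_tol_deg of it; the matched radar range is attached so the
    fused object carries both identity and distance.
    """
    fused = []
    for cam in camera_objs:
        best = min(
            radar_objs,
            key=lambda r: abs(r["azimuth"] - cam["azimuth"]),
            default=None,
        )
        if best is not None and abs(best["azimuth"] - cam["azimuth"]) <= angle_tol_deg:
            fused.append({**cam, "range": best["range"]})
    return fused

# Example: two camera detections, one confirmed by radar and one not.
cams = [{"azimuth": 5.1, "label": "car", "score": 0.92},
        {"azimuth": -20.0, "label": "car", "score": 0.55}]
radars = [{"azimuth": 4.8, "range": 37.2}]
print(fuse_detections(cams, radars))  # only the radar-confirmed detection survives
```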
