Abstract

Accurate and robust object detection is imperative for autonomous driving. In real-world scenarios, the effectiveness of image-based detectors is limited by low visibility and harsh conditions. Owing to their immunity to environmental variability, millimeter-wave (mmWave) radar sensors complement camera sensors, opening up the possibility of radar-camera fusion to improve object detection performance. In this paper, we construct a Radar-Enhanced image Fusion Network (REFNet) for 2D object detection in autonomous driving. Specifically, the radar data are projected onto the camera image plane to unify the data format of the heterogeneous sensing modalities. To overcome the sparsity of radar point clouds, we devise an Uncertainty Radar Block (URB) that increases the density of radar points by accounting for the azimuth uncertainty of radar measurements. Additionally, we design an adaptive network architecture that supports multi-level fusion and can determine the optimal fusion level. Moreover, we incorporate a robust attention module within the fusion network to exploit the synergy of radar and camera information. Evaluated on the widely used nuScenes dataset, our proposed method consistently and significantly outperforms its image-only counterpart across all scenarios, especially in nighttime and rainy conditions.
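
The two preprocessing steps named in the abstract (projecting radar detections onto the camera image plane and densifying sparse radar points along the azimuth-uncertainty arc) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's implementation: the function name project_radar_to_image, the azimuth_sigma_deg value, the number of samples, and the uniform sampling scheme are hypothetical, and the paper's URB and fusion network are not reproduced here.

```python
import numpy as np

def project_radar_to_image(radar_xyz, T_radar_to_cam, K, image_size,
                           azimuth_sigma_deg=1.0, n_samples=5):
    """Project radar points onto the camera image plane and densify them
    by sampling extra points along the azimuth-uncertainty arc.

    radar_xyz        : (N, 3) detections in the radar frame (x forward, y left, z up assumed).
    T_radar_to_cam   : (4, 4) homogeneous extrinsic transform, radar frame -> camera frame.
    K                : (3, 3) camera intrinsic matrix.
    image_size       : (width, height) in pixels.
    azimuth_sigma_deg: assumed 1-sigma azimuth uncertainty of the radar (hypothetical value).
    n_samples        : number of jittered copies created per detection.
    """
    w, h = image_size
    # Azimuth offsets within +/- 1 sigma, used to spread each detection along its arc.
    offsets = np.linspace(-1.0, 1.0, n_samples) * np.deg2rad(azimuth_sigma_deg)

    expanded = []
    for x, y, z in radar_xyz:
        r = np.hypot(x, y)        # range in the horizontal plane
        az = np.arctan2(y, x)     # measured azimuth
        for d_az in offsets:      # rotate the point along the azimuth arc
            expanded.append([r * np.cos(az + d_az), r * np.sin(az + d_az), z])
    pts = np.asarray(expanded)    # (N * n_samples, 3)

    # Transform to the camera frame and project with the pinhole model.
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coordinates
    cam = (T_radar_to_cam @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0.1                          # keep points in front of the camera
    uvw = (K @ cam[in_front].T).T
    uv = uvw[:, :2] / uvw[:, 2:3]

    # Keep only projections that land inside the image.
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[valid], cam[in_front][valid][:, 2]        # pixel coordinates and depths
```

The returned pixel coordinates and depths could then be rasterized into extra image channels and fed to a fusion backbone; how REFNet actually encodes and fuses these projections is detailed in the full paper rather than in this sketch.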
