Abstract

Despite their advantages as inexpensive, weather-robust, long-range sensors that additionally provide velocity information, radar sensors still lead a shadowy existence compared to lidar and camera when it comes to fulfilling the requirements of fully autonomous driving. In this work, we focus on fully leveraging raw radar tensor data instead of building on human-biased point clouds, which are the typical result of traditional radar signal processing. Using a graph neural network on the raw radar tensor, we gain a significant improvement of +10% in average precision over a grid-based convolutional baseline network. Both networks are evaluated on a real-world dataset of dense city traffic scenarios with diverse object orientations and distances as well as occlusions up to visually fully occluded objects. Our proposed network increases the maximum range for state-of-the-art full-3D object detection on radar data from the previous 20 m to 100 m.
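To illustrate the general idea of operating on the radar tensor with a graph neural network rather than rasterizing it for a grid-based CNN, the following is a minimal sketch, not the paper's architecture: it selects high-power cells of a range-azimuth-Doppler tensor as graph nodes, connects them by k-nearest neighbors, and applies one generic mean-aggregation message-passing layer. All names and parameters (e.g. power_threshold, k_neighbors, SimpleMessagePassing) are illustrative assumptions.

```python
# Hedged sketch only: generic message passing over selected radar-tensor cells,
# not the detection network described in the abstract.
import torch

def tensor_to_graph(radar_tensor, power_threshold=0.5, k_neighbors=8):
    """Turn high-power cells of a range x azimuth x Doppler tensor into graph nodes
    and connect each node to its k nearest neighbors in cell-index space."""
    mask = radar_tensor > power_threshold
    idx = torch.nonzero(mask).float()                    # (N, 3) cell coordinates
    power = radar_tensor[mask].unsqueeze(1)              # (N, 1) cell power
    feats = torch.cat([idx, power], dim=1)               # (N, 4) node features
    dists = torch.cdist(idx, idx)                        # pairwise index-space distances
    knn = dists.topk(k_neighbors + 1, largest=False).indices[:, 1:]  # drop self
    src = knn.reshape(-1)
    dst = torch.arange(idx.shape[0]).repeat_interleave(k_neighbors)
    return feats, torch.stack([src, dst])                # edge_index of shape (2, E)

class SimpleMessagePassing(torch.nn.Module):
    """One mean-aggregation message-passing layer (a generic GNN building block)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, edge_index):
        src, dst = edge_index
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])        # sum neighbor features
        deg = torch.zeros(x.shape[0]).index_add_(
            0, dst, torch.ones_like(dst, dtype=torch.float))
        agg = agg / deg.clamp(min=1).unsqueeze(1)                    # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, agg], dim=1)))

# Toy usage on a random stand-in for a range x azimuth x Doppler tensor
feats, edges = tensor_to_graph(torch.rand(32, 32, 16), power_threshold=0.98)
out = SimpleMessagePassing(in_dim=4, out_dim=16)(feats, edges)
print(out.shape)  # (num_selected_cells, 16)
```

The contrast with the grid-based baseline is that the graph representation keeps only occupied cells and their local neighborhoods, instead of convolving over the full, mostly empty tensor grid.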
