Abstract
Within the autonomous driving community, millimeter-wave frequency-modulated continuous-wave (FMCW) radars are not used to their fullest potential. Classical, hand-designed target detection algorithms are applied in the signal processing chain, and the rich contextual information is discarded. This early discarding of information limits what can be applied in algorithms further downstream. In contrast with object detection in camera images, radar has thus been unable to benefit fully from data-driven methods. This work seeks to bridge this gap by providing the community with a diverse, minimally processed FMCW radar dataset that is not only RGB-D (color and depth) aligned but also synchronized with inertial measurement unit (IMU) measurements in the presence of ego-motion. Moreover, having time-synchronized measurements allows for verification and automated or assisted labelling of the radar data, and opens the door for novel methods of fusing the data from a variety of sensors. We present a system that can be built with accessible, off-the-shelf components within a $1000 budget, along with an accompanying dataset consisting of diverse scenes spanning indoor, urban, and highway driving. Finally, we demonstrate the ability to go beyond classical radar object detection with our dataset, achieving a classification accuracy of 85.1% using the low-level radar signals captured by our system, supporting our argument that there is value in retaining the information discarded by current radar pipelines.
Highlights
In comparison to visible light and the lasers used by lidar systems, millimeter-wave frequency-modulated continuous-wave (FMCW) radars use wavelengths that are much larger than the fog, dust, and other particles present in adverse driving conditions that limit visibility
We call this sequence of N chirps a frame, commonly referred to as the coherent processing interval (CPI); it is the basic unit of the FMCW radar signal, just as an image is the basic unit of a camera
We demonstrated baseline results and presented scenarios where modern advances in deep learning could enable richer object detection from automotive FMCW radars
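The frame/CPI structure described in the highlights can be illustrated with a standard range-Doppler processing sketch. This is not the paper's pipeline; the frame dimensions and random data below are placeholders standing in for a real ADC capture, and the two-FFT scheme is the textbook approach to FMCW processing.

```python
import numpy as np

# Hypothetical frame: N chirps ("slow time") by M samples per chirp
# ("fast time"). A real frame would come from the radar's ADC.
n_chirps, n_samples = 64, 256
rng = np.random.default_rng(0)
frame = rng.standard_normal((n_chirps, n_samples)) \
        + 1j * rng.standard_normal((n_chirps, n_samples))

# Range FFT along fast time: beat frequency maps to target range.
range_fft = np.fft.fft(frame, axis=1)

# Doppler FFT along slow time (across chirps): phase progression
# from chirp to chirp maps to radial velocity.
range_doppler = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)

# Power in dB; one cell per (Doppler, range) bin.
rd_map = 20 * np.log10(np.abs(range_doppler) + 1e-12)
print(rd_map.shape)  # (64, 256): Doppler bins x range bins
```

A classical detector (e.g. CFAR) would threshold `rd_map` into a sparse point list; retaining the full map is precisely the "low-level signal" the abstract argues should not be discarded.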
Summary
In comparison to visible light and the lasers used by lidar systems, millimeter-wave (mmWave) FMCW radars use wavelengths that are much larger than the fog, dust, and other particles present in adverse driving conditions that limit visibility. While recent published works in autonomous driving attempt to incorporate radars, the input from the radar consists only of points with velocity, retaining little information from the raw measurements [5], [6], [7], [8]. In these sources, we see methods to increase the number of points, such as integrating over time and using inputs from multiple sensors. Our dataset instead supports semantic object detection, in contrast with classical radar object detection, and micro-Doppler exploitation assisted by RGB-D pose estimation
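The wavelength claim above can be checked with a back-of-the-envelope calculation. The 77 GHz carrier below is an assumption for illustration (a common automotive radar band, not a figure restated from this paper); typical fog droplets are on the order of 1-50 micrometers.

```python
# Wavelength of a mmWave carrier: lambda = c / f.
c = 299_792_458.0   # speed of light, m/s
f = 77e9            # 77 GHz carrier frequency (assumed for illustration)

wavelength_mm = c / f * 1e3
print(f"{wavelength_mm:.2f} mm")  # ~3.89 mm, vs. micrometer-scale fog droplets
```

At roughly four millimeters, the wavelength is about three orders of magnitude larger than a fog droplet, which is why scattering from such particles barely attenuates the radar signal.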
More From: IEEE Journal of Selected Topics in Signal Processing