Abstract

Object detection is a crucial task in autonomous driving. Current object-detection methods for autonomous driving systems rely primarily on cameras and lidar, which can suffer interference from complex lighting or adverse weather. Four-dimensional (4D) (x, y, z, v) millimeter-wave radar provides a denser point cloud than traditional millimeter-wave radar, enabling 3D object-detection tasks that the latter cannot perform. However, existing 3D point-cloud detection algorithms are mostly designed for lidar; they are not necessarily applicable to millimeter-wave radar, whose data are sparser and noisier and include velocity information. This study proposes a 3D object-detection framework based on multi-frame 4D millimeter-wave radar point clouds. First, the ego-vehicle velocity is estimated from the millimeter-wave radar measurements, and the relative velocity of each radar point is compensated to absolute velocity. Second, through inter-frame matching, the point clouds of earlier millimeter-wave radar frames are registered to the last frame. Finally, objects are detected by the proposed multi-frame radar point-cloud detection network. Experiments were performed on our newly recorded TJ4DRadSet dataset, collected in complex traffic environments. The results show that the proposed object-detection framework outperforms the comparison methods in 3D mean average precision. The experimental results and methods can serve as a baseline for other multi-frame 4D millimeter-wave radar detection algorithms.
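The velocity-compensation step described above can be illustrated with a minimal sketch. This is our own simplification, not the paper's implementation: the function name, the (N, 4) array layout, and the assumption that the radar observes only the radial (line-of-sight) component of velocity are ours.

```python
import numpy as np

def compensate_doppler(points, v_ego):
    """Convert relative radial velocities to absolute radial velocities.

    points: (N, 4) array of radar detections [x, y, z, v_rel], where
            v_rel is the measured relative radial (Doppler) velocity.
    v_ego:  (3,) estimated ego-vehicle velocity in the radar frame.

    Assumes the radar measures only the line-of-sight velocity component,
    so the ego motion projected onto each line of sight is added back.
    """
    xyz = points[:, :3]
    # Unit line-of-sight vector from the sensor to each detection.
    los = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    out = points.copy()
    # Absolute radial velocity = measured relative velocity + ego
    # velocity projected onto the line of sight.
    out[:, 3] = points[:, 3] + los @ v_ego
    return out
```

Under this convention, a static object directly ahead of a vehicle moving at 5 m/s is measured with v_rel = -5 m/s, and its compensated absolute radial velocity is 0, which is what allows the detector to separate moving objects from stationary clutter.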
