Abstract

Object detection and tracking using 3D LiDAR has gained momentum recently but has not yet been widely deployed; the main challenges are that conventional mechanical LiDAR is expensive and that a single sensor struggles to maintain good tracking performance over long periods. In this paper, we propose a multi-sensor fusion framework for 3D object detection and tracking using a solid-state LiDAR and an RGB camera. We use a low-cost solid-state LiDAR with an irregular scan pattern and propose a universal clustering method that determines the search radius from laser density and range. To improve overall tracking performance, both range and visual information are used to associate 3D objects across frames. We avoid introducing odometry errors into the system by using visual differences, rather than estimated trajectories, to distinguish between closely located objects, and we design a re-detection process to locate objects missed by the clustering algorithm. We evaluated the proposed method on five sequences captured by our ego-vehicle and three additional sequences from open-source datasets. The results show that the adaptive search radius improves recall and that fusing LiDAR and camera improves overall tracking performance.
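The abstract does not give the paper's actual radius formula, but the idea of a range-adaptive search radius for clustering can be sketched as follows. This is a minimal illustration assuming a hypothetical linear rule (radius grows with range, since LiDAR returns get sparser farther from the sensor) and a simple region-growing clusterer; the `base` and `k` parameters and the `adaptive_radius` helper are illustrative, not from the paper.

```python
import math

def adaptive_radius(point, base=0.2, k=0.01):
    # Hypothetical rule: the search radius grows linearly with range,
    # compensating for sparser point density far from the sensor.
    rng = math.sqrt(point[0] ** 2 + point[1] ** 2 + point[2] ** 2)
    return base + k * rng

def cluster(points, base=0.2, k=0.01):
    """Greedy region-growing clustering with a range-adaptive radius."""
    labels = [-1] * len(points)   # -1 means "not yet assigned"
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        labels[seed] = next_label
        stack = [seed]
        while stack:
            i = stack.pop()
            r = adaptive_radius(points[i], base, k)
            for j in range(len(points)):
                # Absorb any unassigned neighbor within the local radius.
                if labels[j] == -1 and math.dist(points[i], points[j]) <= r:
                    labels[j] = next_label
                    stack.append(j)
        next_label += 1
    return labels

# Two well-separated groups: a dense one near the sensor, a sparse one far away.
near = [(1.0, 0.0, 0.0), (1.1, 0.0, 0.0), (1.0, 0.1, 0.0)]
far = [(30.0, 0.0, 0.0), (30.4, 0.0, 0.0)]
labels = cluster(near + far)
```

With a fixed radius tuned for the near group (about 0.2 m), the far points, 0.4 m apart, would be split into separate clusters; the range-adaptive radius keeps them together, which is the recall benefit the abstract reports.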
