Abstract

Environment perception is a critical part of autonomous driving, since it must provide reliable and accurate object information about the vehicle's surroundings. LIDAR sensors are considered a key enabler for autonomous cars owing to their wide field of view and high-resolution capabilities, and automotive companies' interest in LIDAR is expected to grow further as sensor prices continue to fall. The main aim of this research is to achieve a more precise real-time object detection and tracking (ODT) system for autonomous vehicles. In this paper, we develop, apply, and test two different real-time sensor fusion methods (low-level and high-level) on multiple 3D LIDAR sensors for environment perception. The first contribution of this work is proposing and implementing a high-level track-to-track fusion method for multiple 3D LIDAR sensors; to the best of our knowledge, this is the first automotive application of track-to-track fusion to multiple 3D LIDARs. Another contribution is the analysis and comparison of the track-to-track fusion method's performance against the well-studied low-level real-time fusion method. Both real-time fusion strategies are implemented on an experimental test truck instrumented with two 3D LIDAR sensors, and their performance is evaluated under three different driving scenarios. Additionally, high-accuracy ground-truth data is collected using a global navigation satellite system (GNSS) for performance evaluation. The test results are analyzed in terms of the defined performance criteria, and the benefits and weaknesses of the proposed approach are discussed.
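
The abstract does not spell out the fusion equations. As a rough illustration of what a high-level track-to-track fusion step can look like, the sketch below combines two track estimates of the same object (one per LIDAR tracker) using a simple information-form, covariance-weighted average that neglects cross-covariance between the trackers. All variable names, values, and the specific fusion rule are illustrative assumptions, not the method described in the paper.

```python
import numpy as np

def fuse_tracks(x1, P1, x2, P2):
    """Fuse two track estimates (state, covariance) of the same object.

    Information-form convex combination; cross-covariance between the
    two trackers is neglected for simplicity.
    """
    P1_inv = np.linalg.inv(P1)
    P2_inv = np.linalg.inv(P2)
    P_fused = np.linalg.inv(P1_inv + P2_inv)
    x_fused = P_fused @ (P1_inv @ x1 + P2_inv @ x2)
    return x_fused, P_fused

# Hypothetical [x, y, vx, vy] tracks of one object from two LIDAR trackers
x_lidar_a = np.array([12.3, 4.1, 1.8, 0.2])
P_lidar_a = np.diag([0.25, 0.25, 0.10, 0.10])
x_lidar_b = np.array([12.5, 4.0, 1.7, 0.3])
P_lidar_b = np.diag([0.40, 0.40, 0.15, 0.15])

x_f, P_f = fuse_tracks(x_lidar_a, P_lidar_a, x_lidar_b, P_lidar_b)
print(x_f)  # fused state is weighted toward the lower-covariance track
```

In practice, track-to-track fusion also requires associating tracks across sensors and accounting for correlated estimation errors; the snippet only shows the state-combination step.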
