Abstract

Monitoring vehicular road traffic is a key component of any autonomous driving platform. Detecting and tracking moving objects is crucial for navigating around them and predicting their locations and trajectories. Laser sensors provide excellent observations of the area around a vehicle, but the point clouds of objects may be noisy, occluded, and prone to various errors. Consequently, object tracking remains an open problem, especially for low-quality point clouds. This paper describes a pipeline that integrates various sensor data and prior information, such as a Geospatial Information System (GIS) map, to segment and track moving objects in a scene. We show that even a low-quality GIS map, such as OpenStreetMap (OSM), can improve the tracking accuracy and decrease processing time. A bank of Kalman filters is used to track moving objects in a scene. In addition, we apply a non-holonomic constraint to provide a better orientation estimate of moving objects. The results show that moving objects can be correctly detected and accurately tracked over time based on modest-quality Light Detection And Ranging (LiDAR) data, a coarse GIS map, and a fairly accurate Global Positioning System (GPS) and Inertial Measurement Unit (IMU) navigation solution.
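To make the tracking stage concrete, the following is a minimal sketch of a single track in such a filter bank, assuming a constant-velocity state [x, y, vx, vy] and LiDAR cluster centroids as measurements; the class name `TrackKF`, the state model, and the noise parameters are illustrative assumptions, not the paper's exact formulation. Under the non-holonomic (no side-slip) assumption, the object orientation can be read off the estimated velocity direction.

```python
import numpy as np

# Minimal sketch (not the authors' implementation): one constant-velocity
# Kalman filter per tracked object. The object heading is recovered from the
# velocity vector under a non-holonomic (no side-slip) assumption.
# State: [x, y, vx, vy]; measurement: LiDAR cluster centroid [x, y].

class TrackKF:
    def __init__(self, xy0, dt=0.1, q=1.0, r=0.25):
        self.x = np.array([xy0[0], xy0[1], 0.0, 0.0])
        self.P = np.diag([1.0, 1.0, 10.0, 10.0])
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.Q = q * np.eye(4)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.R = r * np.eye(2)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z):
        y = np.asarray(z) - self.H @ self.x           # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def heading(self):
        # Non-holonomic constraint: orientation follows the velocity direction.
        return np.arctan2(self.x[3], self.x[2])
```

In a full pipeline, a bank of such filters, one per segmented object, would be maintained over time, with a data-association step (for example, nearest-neighbour gating between predicted positions and new cluster centroids) linking detections to tracks between LiDAR scans.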

Highlights

  • Developing sensor technologies provide increasingly rich data that can be processed with improved data fusion techniques, resulting in highly accurate multi-sensor integration that can effectively support high-level computer vision and robotics tasks, such as autonomous driving and scene understanding

  • This paper focuses on point cloud processing since image-processing algorithms are generally less reliable for autonomous vehicles

  • The reference frame for moving object tracking is defined based on the Global Positioning System (GPS)/Inertial Measurement Unit (IMU) navigation solution, and calibrated lever-arms and boresights are used to obtain the imaging sensors’ pose (see the sketch after this list)
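To illustrate the pose chain mentioned in the last highlight, here is a minimal sketch of how a LiDAR pose could be composed from the GPS/IMU body pose and the calibrated lever-arm and boresight; the function names and argument conventions are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): compose the LiDAR pose in the
# mapping frame from the GPS/IMU navigation solution (body pose) and the
# calibrated boresight rotation R_body_lidar and lever-arm t_body_lidar.

def pose_to_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def lidar_pose_world(R_world_body, t_world_body, R_body_lidar, t_body_lidar):
    """Chain the body pose (from GPS/IMU) with the boresight/lever-arm calibration."""
    T_world_body = pose_to_T(R_world_body, t_world_body)
    T_body_lidar = pose_to_T(R_body_lidar, t_body_lidar)
    return T_world_body @ T_body_lidar   # 4x4 LiDAR-to-world transform

# Points measured in the LiDAR frame can then be georeferenced as
# p_world = T_world_lidar @ [x, y, z, 1].
```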


Summary

Introduction

Developing sensor technologies provide increasingly rich data that can be processed with improved data fusion techniques, resulting in highly accurate multi-sensor integration that can effectively support high-level computer vision and robotics tasks, such as autonomous driving and scene understanding. Road traffic monitoring is a key component of autonomous driving and can be divided into subcategories, such as object segmentation, object tracking, and object recognition. Progress in image and point cloud processing algorithms has resulted in stronger object tracking approaches. However, images and point clouds are prone to noise, clutter, and occlusion, and tracking remains a challenging task in autonomous driving. This paper focuses on point cloud processing since image-processing algorithms are generally less reliable for autonomous vehicles.
