Abstract

This paper develops a non-model-based vehicle tracking methodology for extracting road user trajectories as they pass through the field of view of a 3D LiDAR sensor mounted on the side of the road. To minimize errors, our work breaks from conventional practice and postpones target segmentation until after collecting LiDAR returns over many scans. Specifically, our method excludes all non-vehicle returns in each scan and retains the ungrouped vehicle returns. These vehicle returns are stored over time in a spatiotemporal stack (ST stack), and we develop a vehicle motion estimation framework to cluster the returns from the ST stack into distinct vehicles and extract their trajectories. This processing includes removing the impact of the target's changing orientation relative to the LiDAR sensor while separately taking care to preserve the crisp transition to/from a stop that would normally be washed out by conventional data smoothing or filtering. This proof-of-concept study develops the methodology using a single LiDAR sensor, thus limiting the surveillance region to the effective range of the given sensor. It should be clear from the presentation that, provided sufficient georeferencing, the surveillance region can be extended indefinitely by deploying multiple LiDAR sensors with overlapping fields of view.
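
To make the ST stack idea concrete, the sketch below is a conceptual illustration only, not the authors' implementation. It assumes 2D (x, y) vehicle returns per scan, substitutes an off-the-shelf density-based clusterer (DBSCAN) for the paper's vehicle motion estimation framework, and uses hypothetical function names and thresholds.

```python
# Conceptual sketch (assumptions noted above): accumulate per-scan vehicle
# returns into a spatiotemporal (ST) stack, cluster the stacked points into
# distinct vehicles, and extract a simple centroid trajectory per vehicle.
import numpy as np
from sklearn.cluster import DBSCAN


def build_st_stack(scans, timestamps):
    """Stack (x, y) vehicle returns from many scans, tagging each point
    with its scan time to form an (N, 3) spatiotemporal point set."""
    stacked = []
    for points, t in zip(scans, timestamps):
        if len(points) == 0:
            continue
        times = np.full((len(points), 1), t)
        stacked.append(np.hstack([points, times]))  # columns: x, y, t
    return np.vstack(stacked)


def cluster_vehicles(st_stack, eps=1.5, min_samples=10, time_scale=2.0):
    """Group ST-stack points into vehicles via density-based clustering.
    Time is rescaled so temporal proximity counts alongside spatial proximity.
    (eps, min_samples, time_scale are illustrative values.)"""
    features = st_stack.copy()
    features[:, 2] *= time_scale
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)


def extract_trajectory(st_stack, labels, vehicle_id):
    """Return the (t, centroid_x, centroid_y) trajectory of one cluster."""
    pts = st_stack[labels == vehicle_id]
    traj = []
    for t in np.unique(pts[:, 2]):
        scan_pts = pts[pts[:, 2] == t]
        traj.append((t, scan_pts[:, 0].mean(), scan_pts[:, 1].mean()))
    return np.array(traj)
```

In this sketch, clustering only after stacking many scans mirrors the paper's decision to postpone segmentation, but the actual framework, its handling of target orientation, and its stop-preserving trajectory extraction are not reproduced here.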
