Abstract

The performance of traditional video-based traffic surveillance systems is susceptible to illumination variation and perspective distortion. Because Light Detection and Ranging (LiDAR) is insensitive to both factors, LiDAR-based traffic surveillance systems have attracted significant research attention in recent years. The first step in LiDAR data processing is the effective extraction of moving foreground objects from a reference background. However, existing methods detect only a static background, based on LiDAR point density or relative distance. In this research, we develop a novel dense background representation model (DBRM) for stationary roadside LiDAR sensors that detects both static and dynamic backgrounds for freeway traffic surveillance. Background objects tend to be stationary in space and time, and DBRM exploits this property to detect the two background types. The static background is represented by fixed structures, while the dynamic background, characterized by quasi-static objects such as tree foliage, is modeled by mixtures of Gaussian probability distributions. Experiments were carried out in two different scenarios to compare the proposed model with two state-of-the-art models. The results demonstrate the effectiveness, robustness, and detail-preserving advantages of the proposed model.
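To illustrate the mixture-of-Gaussians idea mentioned above, the sketch below models the range returns observed along a single fixed LiDAR beam as an online Gaussian mixture, in the spirit of classic MOG background subtraction. This is a hypothetical minimal implementation for intuition only, not the paper's DBRM formulation; the class name `RangeMOG` and all parameters (`k`, `lr`, `match_sigma`, `bg_weight`) are assumptions of this sketch.

```python
import numpy as np


class RangeMOG:
    """Online mixture-of-Gaussians over range returns of one LiDAR beam.

    Hypothetical sketch: quasi-static background (e.g. swaying foliage)
    yields range returns that cluster around a few recurring values, so
    high-weight mixture components are treated as background.
    """

    def __init__(self, k=3, lr=0.05, match_sigma=2.5, bg_weight=0.6, var_min=0.01):
        self.k = k                    # number of mixture components
        self.lr = lr                  # learning rate for online updates
        self.match_sigma = match_sigma  # match threshold in std deviations
        self.bg_weight = bg_weight    # cumulative weight deemed background
        self.var_min = var_min        # variance floor to avoid overfitting
        self.means = np.zeros(k)
        self.vars = np.full(k, 1.0)
        self.weights = np.full(k, 1.0 / k)

    def update(self, r):
        """Absorb one range observation r (in meters) into the mixture."""
        z = np.abs(r - self.means) / np.sqrt(self.vars)
        if (z < self.match_sigma).any():
            # Update the closest matching component.
            i = int(np.argmin(np.where(z < self.match_sigma, z, np.inf)))
            self.means[i] += self.lr * (r - self.means[i])
            self.vars[i] += self.lr * ((r - self.means[i]) ** 2 - self.vars[i])
            self.vars[i] = max(self.vars[i], self.var_min)
            self.weights += self.lr * ((np.arange(self.k) == i) - self.weights)
        else:
            # No match: replace the weakest component with a new one.
            i = int(np.argmin(self.weights))
            self.means[i], self.vars[i], self.weights[i] = r, 4.0, self.lr
        self.weights /= self.weights.sum()

    def is_background(self, r):
        """Classify a return as background if it matches a high-weight component."""
        order = np.argsort(-self.weights)
        cum = np.cumsum(self.weights[order])
        bg = order[cum <= self.bg_weight + 1e-9]
        if bg.size == 0:
            bg = order[:1]  # always keep at least the dominant component
        z = np.abs(r - self.means[bg]) / np.sqrt(self.vars[bg])
        return bool((z < self.match_sigma).any())


# Usage: a wall at ~30 m with small range jitter becomes background;
# a vehicle return at 12 m is then flagged as foreground.
rng = np.random.default_rng(0)
beam = RangeMOG()
for r in 30.0 + 0.05 * rng.standard_normal(300):
    beam.update(float(r))
```

A full system would maintain one such model per beam (or per voxel) and classify each new frame point-by-point; the key design choice, mirrored from the abstract, is that background membership is probabilistic rather than a single fixed range.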
