Abstract

The performance of traditional video-based traffic surveillance systems is susceptible to illumination variation and perspective distortion. Because Light Detection and Ranging (LiDAR) is insensitive to both factors, LiDAR-based traffic surveillance systems have attracted considerable research interest in recent years. The first step in LiDAR data processing is the effective extraction of moving foreground objects from a referenced background. However, existing methods detect only a static background, based on LiDAR point density or relative distance. In this research, we develop a novel dense background representation model (DBRM) for stationary roadside LiDAR sensors that detects both static and dynamic backgrounds for freeway traffic surveillance. DBRM exploits the property that background objects tend to be stationary in space and time. The static background is represented by fixed structures, while the dynamic background, characterized by quasi-static objects such as tree foliage, is modeled by mixtures of Gaussian probability distributions. Experiments were carried out in two different scenarios to compare the proposed model with two other state-of-the-art models. The results demonstrate the effectiveness, robustness, and detail-preserving advantages of the proposed model.
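
For illustration only, the sketch below shows one way a per-cell Gaussian mixture can label quasi-static returns (such as swaying foliage) as dynamic background while flagging out-of-distribution returns (such as a passing vehicle) as foreground. The abstract does not describe DBRM's internals, so the cell abstraction, component count, and likelihood threshold here are hypothetical choices, not the authors' method.

```python
# Minimal sketch, assuming a stationary roadside LiDAR whose returns are
# binned into angular cells; all parameters below are hypothetical.
import numpy as np
from sklearn.mixture import GaussianMixture


class DynamicBackgroundCell:
    """Models the range measurements seen in one cell with a Gaussian mixture,
    so quasi-static returns (e.g., foliage) can be treated as background."""

    def __init__(self, n_components=3, log_lik_threshold=-4.0):
        self.gmm = GaussianMixture(n_components=n_components)
        self.log_lik_threshold = log_lik_threshold  # hypothetical cutoff
        self.fitted = False

    def fit(self, training_ranges):
        # training_ranges: ranges observed in this cell over many frames
        self.gmm.fit(np.asarray(training_ranges).reshape(-1, 1))
        self.fitted = True

    def is_background(self, r):
        # A new return is background if it is likely under the learned mixture.
        if not self.fitted:
            return False
        return self.gmm.score_samples(np.array([[r]]))[0] > self.log_lik_threshold


# Example: foliage returns fluctuate around 12 m; a vehicle passes at 6 m.
rng = np.random.default_rng(0)
cell = DynamicBackgroundCell()
cell.fit(12.0 + 0.3 * rng.standard_normal(500))  # quasi-static background
print(cell.is_background(12.1))  # True  -> dynamic background (foliage)
print(cell.is_background(6.0))   # False -> moving foreground (vehicle)
```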
