Abstract

This study is part of an investigation into the application of roadside LiDAR sensors to create a human-in-the-loop system that actively protects all road users, especially pedestrians and cyclists. The primary challenge in deploying such a system in the field is computational efficiency. Unlike on-board LiDAR sensors used in autonomous vehicles, roadside applications must perform complete background filtering and monitor all traffic movement in real time. This paper presents an innovative method for fast filtering of background objects from roadside LiDAR data and demonstrates that it achieves higher accuracy, faster processing, and lower data-storage requirements. The proposed method is innovative in that it embeds background filtering within the decoding process to exclude irrelevant information, thus avoiding the computational expense of analyzing background points that are not of interest. A 2D channel-azimuth background table is generated by learning the critical distance information of both static and dynamic backgrounds, which is more accurate than traditional background models that consider only static backgrounds. Performance measurements and comparative analysis were conducted using data from our testbeds and open-source datasets. Compared with state-of-the-art methods, the new method achieves faster processing speed (0.65 ms/frame and 0.90 ms/frame) and requires less data storage (225 KB and 450 KB of memory) when processing 16- and 32-laser LiDAR data collected at 10 Hz.
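The core idea described above can be illustrated with a minimal sketch. All names, dimensions, and parameters below are assumptions for illustration, not the paper's implementation: we assume a 16-laser sensor, azimuth quantized into 0.2° bins, and a deliberately simplified learning rule (the closest persistent return per cell, minus a safety margin) that does not capture the paper's full static-plus-dynamic background learning. During decoding, each point requires only one table lookup and one comparison, which is what makes the filter cheap enough to run inline.

```python
import numpy as np

# Assumed dimensions: 16 laser channels, 360° / 0.2° = 1800 azimuth bins.
N_CHANNELS = 16
N_AZIMUTH_BINS = 1800

# 2D channel-azimuth background table: for each (channel, azimuth bin),
# the learned critical distance of the background surface. Cells never
# observed stay at infinity, so their points are always kept.
background_table = np.full((N_CHANNELS, N_AZIMUTH_BINS), np.inf)

def learn_background(frames):
    """Simplified learning pass: record, per cell, the closest distance
    seen across training frames. Each frame is a tuple of equal-length
    arrays (channel indices, azimuth-bin indices, distances in meters)."""
    for ch, az, dist in frames:
        np.minimum.at(background_table, (ch, az), dist)

def filter_point(channel, azimuth_bin, distance, margin=0.3):
    """Inline check during packet decoding: keep a point only if it is
    clearly nearer than the learned background surface for its cell."""
    return distance < background_table[channel, azimuth_bin] - margin
```

With these assumed dimensions, a float64 table occupies 16 × 1800 × 8 B ≈ 225 KB (32 × 1800 × 8 B ≈ 450 KB for a 32-laser sensor), which is consistent with the storage figures reported in the abstract, though the paper's exact quantization and data types are not stated here.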
