Abstract

3D point cloud maps are critical for self-driving cars and robots to perform navigation and localization in changing environments. However, the lidar scans used for map construction may contain dynamic objects, which leave unwanted traces in the maps. These traces inevitably deteriorate map quality and degrade both localization and navigation accuracy. State-of-the-art works suffer to some extent from ambiguity caused by noisy motion or incidence angle. To tackle this problem, we propose a robust method that keeps only static points and removes dynamic ones from the map by comparing each single scan against the registered, noisy map. The method exploits the fact that light travels in a straight line and cannot penetrate most objects. Accordingly, we present a two-stage projection distance-based strategy to discriminate static from dynamic points. Furthermore, we propose a novel way of generating artificial background endpoints to handle the case where insufficient static points are observed behind the dynamic points. Experimental evaluations on the SemanticKITTI dataset show that our method is more robust and reliable than competing methods.
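To make the core projection-distance idea concrete, below is a minimal sketch (not the paper's actual two-stage pipeline) of a single-stage visibility check: map points are projected into a spherical range image built from one scan, and a map point whose projection distance is shorter than the range the lidar actually measured along the same ray is flagged as dynamic, since the beam must have passed through that location. All function names, parameters (e.g., `margin`, `rows`, `cols`, the vertical field of view), and the use of a simple range image are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def classify_map_points(map_points, scan_points, pose,
                        fov_up=3.0, fov_down=-25.0,
                        rows=64, cols=1024, margin=0.2):
    """Flag map points as dynamic if a scan ray travels past them.

    map_points: (N, 3) points in the map frame.
    scan_points: (M, 3) points of one scan in the sensor frame.
    pose: 4x4 transform taking map-frame points into the sensor frame.
    margin: range tolerance in metres (illustrative value).
    """
    # Transform map points into the sensor frame of the current scan.
    pts_h = np.hstack([map_points, np.ones((len(map_points), 1))])
    local = (pose @ pts_h.T).T[:, :3]

    def spherical_index(pts):
        # Convert Cartesian points to (column, row, range) in a range image.
        r = np.linalg.norm(pts, axis=1)
        yaw = np.arctan2(pts[:, 1], pts[:, 0])
        pitch = np.arcsin(pts[:, 2] / np.maximum(r, 1e-9))
        u = ((yaw + np.pi) / (2 * np.pi) * cols).astype(int) % cols
        fov = np.radians(fov_up) - np.radians(fov_down)
        v = ((np.radians(fov_up) - pitch) / fov * rows).astype(int)
        v = np.clip(v, 0, rows - 1)
        return u, v, r

    # Build a range image from the scan: measured distance per pixel.
    su, sv, sr = spherical_index(scan_points)
    range_img = np.full((rows, cols), np.inf)
    np.minimum.at(range_img, (sv, su), sr)

    # Project map points into the same image and compare projection
    # distances: if the scan measured a longer range along the same ray,
    # the beam passed through the map point's location, so it is likely
    # dynamic (a leftover trace of a moving object).
    mu, mv, mr = spherical_index(local)
    measured = range_img[mv, mu]
    dynamic = np.isfinite(measured) & (measured > mr + margin)
    return dynamic  # boolean mask over map_points
```

A single fixed `margin` is a simplification; the paper's two-stage strategy and the artificial background endpoints are specifically meant to resolve the ambiguous cases (noisy motion, grazing incidence angles, missing background returns) that such a naive check handles poorly.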
