Abstract
In this work, we improve the semantic segmentation of multi-layer top-view grid maps in the context of LiDAR-based perception for autonomous vehicles. To achieve this goal, we fuse sequential information from multiple consecutive LiDAR measurements with respect to the driven trajectory of an autonomous vehicle. By doing so, we enrich the multi-layer grid maps which are subsequently used as the input of a neural network. Our approach can be used for LiDAR-only 360° surround-view semantic scene segmentation while being suitable for real-time critical systems. We evaluate the benefit of fusing sequential information based on a dense ground truth and discuss the effect on different semantic classes.
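The trajectory-based fusion described above can be illustrated with a minimal sketch: a previous top-view grid map is warped into the current vehicle frame using the ego-motion (translation and yaw from the driven trajectory) and combined cell-wise with the current measurement. All names, the nearest-neighbor warping, and the cell-wise maximum fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def fuse_grid_maps(prev_map, curr_map, dx_cells, dy_cells, yaw_rad):
    """Warp the previous top-view grid map into the current vehicle frame
    (rotation about the map center plus translation, both derived from the
    driven trajectory), then fuse it with the current measurement layer by
    cell-wise maximum. Hypothetical sketch, not the paper's method.
    prev_map, curr_map: (H, W) float arrays, e.g. one occupancy layer."""
    h, w = prev_map.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    # Inverse mapping: for each target cell, find the source cell in the
    # previous map (nearest-neighbor sampling for simplicity).
    c, s = np.cos(-yaw_rad), np.sin(-yaw_rad)
    xs0, ys0 = xs - cx - dx_cells, ys - cy - dy_cells
    src_x = np.round(c * xs0 - s * ys0 + cx).astype(int)
    src_y = np.round(s * xs0 + c * ys0 + cy).astype(int)
    valid = (src_x >= 0) & (src_x < w) & (src_y >= 0) & (src_y < h)
    warped = np.zeros_like(prev_map)
    warped[valid] = prev_map[src_y[valid], src_x[valid]]
    return np.maximum(warped, curr_map)
```

In a real pipeline this warp-and-fuse step would run per layer of the multi-layer grid map for each of the N most recent scans before the fused tensor is fed to the segmentation network; a probabilistic update (e.g. log-odds) could replace the maximum.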