Abstract

LiDAR sensors are almost indispensable for autonomous robots to perceive the surrounding environment. However, the transmission of large-scale LiDAR point clouds is highly bandwidth-intensive, which can easily lead to transmission problems, especially over unstable communication networks. Meanwhile, existing LiDAR data compression is mainly based on rate-distortion optimization, which ignores the semantic information of ordered point clouds and the task requirements of autonomous robots. To address these challenges, this article presents a task-driven Scene-Aware LiDAR Point Clouds Coding (SA-LPCC) framework for autonomous vehicles. Specifically, a semantic segmentation model is developed based on multi-dimension information, in which both 2D texture and 3D topology information are fully utilized to segment movable objects. Further, a prediction-based deep network is explored to remove spatial-temporal redundancy. Experimental results on the benchmark SemanticKITTI dataset validate that SA-LPCC achieves state-of-the-art performance in terms of reconstruction quality and storage space for downstream tasks.
We believe that SA-LPCC, which jointly considers the scene-aware characteristics of movable objects and removes spatial-temporal redundancy through an end-to-end learning mechanism, will advance related applications from algorithm optimization to industrial products.
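To make the two-stage idea concrete, the following is a minimal sketch of the coding flow the abstract describes: segment each frame into movable and static points, then predict the static background from the previous frame so that only residuals are transmitted. All names and the label-based segmentation are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of the SA-LPCC coding flow from the abstract.
# The real system uses a learned multi-dimension segmentation network and a
# prediction-based deep network; here segmentation is a simple label flag
# and prediction is set difference, purely for illustration.

def segment_movable(points):
    """Stand-in for scene-aware segmentation: split a frame into
    movable objects and static background."""
    movable = [p for p in points if p["movable"]]
    static = [p for p in points if not p["movable"]]
    return movable, static

def predict_residual(prev_static, static):
    """Stand-in for the prediction network: keep only static points
    not explained by the previous frame (temporal redundancy removed)."""
    seen = {p["id"] for p in prev_static}
    return [p for p in static if p["id"] not in seen]

def encode_sequence(frames):
    """Movable objects are coded per frame; the static background is
    coded once plus per-frame residuals."""
    stream, prev_static = [], []
    for frame in frames:
        movable, static = segment_movable(frame)
        residual = predict_residual(prev_static, static)
        stream.append({"movable": movable, "static_residual": residual})
        prev_static = static
    return stream
```

Under this toy model, a static point that persists across frames is transmitted only once, which is the source of the bitrate savings the abstract claims.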
