Abstract

We present a real-time Truncated Signed Distance Field (TSDF)-based three-dimensional (3D) semantic reconstruction method for LiDAR point clouds, which achieves incremental surface reconstruction and highly accurate semantic segmentation. High-precision real-time 3D semantic reconstruction from LiDAR data is important but challenging: Light Detection and Ranging (LiDAR) data is highly accurate but massive, which makes real-time 3D reconstruction difficult. We therefore propose a line-of-sight algorithm to update the implicit surface incrementally. Meanwhile, to exploit semantic information more effectively, we propose an online attention-based spatial and temporal feature fusion method, which is well integrated into the reconstruction system. We parallelize the reconstruction and semantic fusion processes, which achieves real-time performance. We demonstrate our approach on the CARLA dataset, the Apollo dataset, and our own dataset. Compared with state-of-the-art mapping methods, our method has a clear advantage in both quality and speed, meeting the needs of robotic mapping and navigation.
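
The abstract does not detail the line-of-sight update itself, so the following is only a minimal sketch of how a TSDF might be updated incrementally along a LiDAR ray. The voxel hash map, the sampling step, the truncation distance, and the constant-weight running average are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

voxel_size = 0.1   # voxel edge length in meters (assumed)
trunc_dist = 0.3   # truncation distance tau in meters (assumed)

tsdf = {}          # voxel index (i, j, k) -> (signed distance, weight)

def update_ray(origin, endpoint):
    """Update every voxel along the sensor-to-point line of sight
    that lies within the truncation band around the measured surface."""
    direction = endpoint - origin
    depth = np.linalg.norm(direction)
    direction = direction / depth
    # Sample the ray only near the hit point: [depth - tau, depth + tau].
    for t in np.arange(max(depth - trunc_dist, 0.0),
                       depth + trunc_dist, voxel_size):
        p = origin + t * direction
        key = tuple(np.floor(p / voxel_size).astype(int))
        # Projective signed distance: positive in front of the surface,
        # negative behind it, clamped to the truncation band.
        sdf = np.clip(depth - t, -trunc_dist, trunc_dist)
        d_old, w_old = tsdf.get(key, (0.0, 0.0))
        w_new = w_old + 1.0  # constant per-observation weight (assumed)
        tsdf[key] = ((d_old * w_old + sdf) / w_new, w_new)

# Example: one LiDAR return at (4.0, 1.0, 0.2) seen from the sensor.
update_ray(np.array([0.0, 0.0, 1.5]), np.array([4.0, 1.0, 0.2]))
```

Because each ray touches only the voxels inside its truncation band, updates of independent rays can be distributed across threads, which is consistent with the parallel computation the abstract describes.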

Highlights

  • When entering an unfamiliar environment, it is very important for a robot to perceive the 3D structure and semantic information of its surroundings in real time

  • We verify the effectiveness of our method for both 3D semantic segmentation and reconstruction on three datasets

  • Our approach obtains good results on large objects, such as roads and buildings, and can accurately recover small objects, such as street lights and boxes. These results prove that the line-of-sight algorithm is very effective on Light Detection and Ranging (LiDAR) point clouds

Introduction

Perceiving 3D structure and semantic information in real time is very important. Reconstructing a precise and continuous surface in real time allows robots to respond accurately and quickly. At the same time, fusing semantic information into the reconstruction enriches the map with scene understanding for downstream tasks. Researchers have proposed many methods to achieve precise surface reconstruction from LiDAR point clouds. Verma et al. [1], Zhou et al. [2], and Poullis et al. [3] created 3D scenes from LiDAR data. In these methods, noise was removed by classification, individual building patches and ground points were separated by segmentation, and mesh models were generated from the building patches.
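
To make the idea of online semantic fusion into a voxel map concrete, the sketch below accumulates per-class probabilities in a voxel with a scalar confidence weight. Using the maximum softmax probability as that weight is a stand-in assumption for illustration only; the paper's method instead learns an attention-based spatial and temporal fusion over features.

```python
import numpy as np

def fuse_semantics(voxel_probs, new_probs):
    """Fold a new per-point class distribution into the distribution
    already stored in the corresponding voxel.

    voxel_probs: accumulated (unnormalized) class scores in the voxel.
    new_probs:   softmax output of the segmentation network for the point.
    """
    # Stand-in attention score: confidence of the current prediction.
    attention = float(np.max(new_probs))
    fused = voxel_probs + attention * new_probs
    return fused / fused.sum()  # renormalize to a valid distribution
```

Weighting each observation by a confidence-like score lets repeated, consistent observations of a voxel sharpen its class distribution over time, while low-confidence outliers contribute little.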
