Abstract

Processing massive LiDAR point clouds is time-consuming due to the sheer volume of data involved and the computationally intensive, iterative nature of the algorithms. Many current and emerging applications of LiDAR, however, require real-time or near-real-time processing; relevant examples include environmental studies, military applications, and the tracking and monitoring of hazards. Recent advances in Graphics Processing Units (GPUs) have opened a new era of General-Purpose computing on Graphics Processing Units (GPGPU). In this paper, we seek to harness the computing power of contemporary GPUs to accelerate the processing of massive LiDAR point clouds. We propose a CUDA-based method for accelerating this processing on CUDA-enabled GPUs. Our experimental results show that our GPGPU-based parallel implementation significantly reduces the time needed to construct a triangulated irregular network (TIN) from a LiDAR point cloud, in comparison with state-of-the-art CPU-based algorithms.
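To make the GPGPU pattern the abstract alludes to concrete, the following is a minimal CUDA sketch of one-thread-per-point processing of a LiDAR attribute array. The kernel name shiftElevation, the point count, and the elevation offset are illustrative assumptions for this sketch only; this is not the paper's TIN-construction method.

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Hypothetical per-point kernel: shifts every point's elevation (z) by a
// constant offset. It illustrates the general one-thread-per-point GPGPU
// pattern, not the TIN algorithm described in the paper.
__global__ void shiftElevation(float *z, int n, float offset) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        z[i] += offset;
    }
}

int main() {
    const int n = 1 << 20;                       // one million points (illustrative)
    const size_t bytes = n * sizeof(float);

    // Host-side dummy elevations standing in for a real LiDAR z array.
    float *hZ = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) hZ[i] = 100.0f;

    // Copy the points to device memory.
    float *dZ;
    cudaMalloc(&dZ, bytes);
    cudaMemcpy(dZ, hZ, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per point.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    shiftElevation<<<blocks, threads>>>(dZ, n, -50.0f);

    // Copy the results back and verify one value.
    cudaMemcpy(hZ, dZ, bytes, cudaMemcpyDeviceToHost);
    printf("z[0] after shift: %f\n", hZ[0]);     // expect 50.0

    cudaFree(dZ);
    free(hZ);
    return 0;
}

Because each point is handled by an independent thread, this kind of embarrassingly parallel step maps naturally onto the GPU; the data-dependent stages of TIN construction require more careful decomposition, which is the subject of the paper itself.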
