Abstract

Processing massive LiDAR point clouds is time consuming due to the sheer volume of data involved and the computationally intensive, iterative nature of the algorithms. Moreover, many current and emerging applications of LiDAR require real-time or near-real-time processing; relevant examples include environmental studies, military applications, and the tracking and monitoring of hazards. Recent advances in Graphics Processing Units (GPUs) have opened a new era of General-Purpose computing on Graphics Processing Units (GPGPU). In this paper, we seek to harness the computing power of contemporary GPUs to accelerate the processing of massive LiDAR point clouds, and we propose a CUDA-based method that runs on CUDA-enabled GPUs. Our experimental results show that the GPGPU-based parallel implementation significantly reduces the time required to construct a TIN (triangulated irregular network) from a LiDAR point cloud, in comparison with current state-of-the-art CPU-based algorithms.
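To make the GPGPU idea concrete, the sketch below shows the kind of embarrassingly parallel, one-thread-per-point CUDA kernel that pipelines like this typically use as a preprocessing stage before triangulation. This is a minimal illustration under assumed names (`Point3`, `shiftToOriginKernel` are hypothetical), not the authors' TIN-construction implementation: it simply shifts a point cloud so its minimum corner sits at the origin, a common normalization step before building a TIN.

```cuda
// Illustrative sketch only: per-point LiDAR preprocessing on a CUDA GPU.
// All type and kernel names here are hypothetical, not from the paper.
#include <cstdio>
#include <cuda_runtime.h>

struct Point3 { float x, y, z; };

// One thread per point: translate the cloud so its minimum corner is at
// the origin, reducing floating-point range issues before triangulation.
__global__ void shiftToOriginKernel(Point3* pts, int n,
                                    float minX, float minY, float minZ) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        pts[i].x -= minX;
        pts[i].y -= minY;
        pts[i].z -= minZ;
    }
}

int main() {
    const int n = 1 << 20;  // 1M synthetic points stand in for a LiDAR tile
    Point3* h = (Point3*)malloc(n * sizeof(Point3));
    for (int i = 0; i < n; ++i)
        h[i] = { 100.f + i % 1000, 200.f + i % 997, 50.f + i % 31 };

    Point3* d;
    cudaMalloc(&d, n * sizeof(Point3));
    cudaMemcpy(d, h, n * sizeof(Point3), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover every point.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    shiftToOriginKernel<<<blocks, threads>>>(d, n, 100.f, 200.f, 50.f);
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, n * sizeof(Point3), cudaMemcpyDeviceToHost);
    printf("first point after shift: (%f, %f, %f)\n", h[0].x, h[0].y, h[0].z);

    cudaFree(d);
    free(h);
    return 0;
}
```

Because each point is processed independently, such stages scale almost linearly with the number of GPU cores; the harder part of GPU TIN construction, parallelizing the triangulation itself, is the subject of the method proposed in the paper.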
