Abstract

Recent works have attempted to extract features such as road markings from images derived from sparse mobile LiDAR scanning point clouds via convolutional neural networks (CNNs). In this paper, the use of such methods for ground segmentation was explored. To begin, the point cloud from each channel was projected onto the y-z plane to generate the images used for training and testing the CNN model. Then, for the main workflow, the following steps were performed for each channel: (1) point cloud-to-image conversion; (2) CNN classification; and (3) image-to-point cloud projection. Using multi-threading, the channels were processed in parallel to generate the ground-segmented sparse point cloud. Our findings show successful ground segmentation, achieving an F1-score of 98.9%; however, the method performed 27.81% slower than RANSAC. Overall, this initial investigation has demonstrated that ground segmentation from sparse point cloud-derived imagery is possible, and with further improvements to make the CNN model faster, it has good potential to act as an alternative to conventional point cloud processing.
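As an illustration only, the per-channel workflow described above might be organized roughly as in the following Python sketch. The function names (channel_to_image, classify_pixels, segment_channel), the grid resolution, and the thread-pool parallelism are assumptions made for this sketch rather than the authors' implementation, and the CNN inference step is replaced by a placeholder so the example stays self-contained and runnable.

```python
# Hypothetical sketch of the per-channel pipeline; names and parameters are
# assumptions, not taken from the paper.
from concurrent.futures import ThreadPoolExecutor

import numpy as np


def channel_to_image(points, resolution=0.05):
    """Step (1): project one channel's points onto the y-z plane as a binary occupancy image."""
    yz = points[:, 1:3]                                   # keep y and z coordinates
    origin = yz.min(axis=0)
    pix = np.floor((yz - origin) / resolution).astype(int)
    h, w = pix.max(axis=0) + 1
    image = np.zeros((h, w), dtype=np.uint8)
    image[pix[:, 0], pix[:, 1]] = 1                       # mark occupied pixels
    return image, pix


def classify_pixels(image):
    """Step (2): placeholder for CNN inference returning a per-pixel ground mask.
    Here every occupied pixel is labelled ground so the sketch remains runnable
    without a trained model."""
    return image.astype(bool)


def segment_channel(points):
    """Step (3): map the per-pixel labels back to the points of this channel."""
    image, pix = channel_to_image(points)
    ground_mask = classify_pixels(image)
    keep = ground_mask[pix[:, 0], pix[:, 1]]
    return points[keep]


def segment_ground(channels):
    """Process all channels in parallel, mirroring the multi-threaded workflow."""
    with ThreadPoolExecutor() as pool:
        segmented = list(pool.map(segment_channel, channels))
    return np.vstack(segmented)


if __name__ == "__main__":
    # Toy data: 16 channels of random x-y-z points, purely for demonstration.
    rng = np.random.default_rng(0)
    channels = [rng.uniform(-5, 5, size=(1000, 3)) for _ in range(16)]
    print(segment_ground(channels).shape)
```

In practice the placeholder classify_pixels would be replaced by the trained CNN's inference call, and the reprojection in segment_channel is what carries the image-space labels back onto the original sparse point cloud.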
