Abstract

Mobile lidar point clouds are commonly used for 3d mapping of road environments, as they provide a rich, highly detailed geometric representation of objects on and around the road. However, raw lidar point clouds lack semantic information about the type of objects, which is necessary for many applications. Existing methods for classifying objects in mobile lidar data, including state-of-the-art deep learning methods, achieve relatively low accuracies, and a primary reason for this under-performance is the lack of 3d training samples sufficient to train deep networks. In this paper, we propose a generative model for creating synthetic 3d point segments that can help improve the classification performance of mobile lidar point clouds. We train a 3d Adversarial Autoencoder (3dAAE) to generate synthetic point segments that closely resemble real point segments and share similar geometric features with them. We evaluate the performance of a PointNet-like classifier trained with and without the synthetic point segments. The evaluation results support our hypothesis that augmenting the training data with synthetic samples leads to a significant improvement in classification performance. Specifically, our model achieves an F1 score of 0.94 for vehicles and pedestrians and 1.00 for traffic signs.
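
As a rough illustration of the adversarial autoencoder idea described above, the sketch below shows a PointNet-style encoder, an MLP decoder, and a discriminator on the latent code that pushes encoded segments toward a Gaussian prior. This is only a minimal PyTorch sketch under assumed settings (fixed-size segments of `num_points` points, an assumed `latent_dim`, and a simple Chamfer reconstruction loss); it is not the architecture or hyperparameters used in the paper.

```python
# Minimal sketch of a 3d adversarial autoencoder (AAE) for point segments.
# Each segment is assumed to be a fixed-size (N, 3) tensor; sizes such as
# latent_dim and num_points are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.fc = nn.Linear(256, latent_dim)

    def forward(self, x):                   # x: (B, N, 3)
        feat = self.mlp(x.transpose(1, 2))  # (B, 256, N)
        feat = feat.max(dim=2).values       # symmetric pooling over points
        return self.fc(feat)                # (B, latent_dim)

class Decoder(nn.Module):
    """MLP decoder that maps a latent code back to a fixed-size point set."""
    def __init__(self, latent_dim=128, num_points=1024):
        super().__init__()
        self.num_points = num_points
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, z):
        return self.net(z).view(-1, self.num_points, 3)

class LatentDiscriminator(nn.Module):
    """Distinguishes encoded latents from samples of a Gaussian prior."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, z):
        return self.net(z)                  # raw real/fake logits

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two point sets of shape (B, N, 3)."""
    d = torch.cdist(a, b)                   # (B, N, N) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```

In this sketch, training would minimise the reconstruction loss chamfer_distance(decoder(encoder(x)), x) jointly with an adversarial loss that makes encoded latents indistinguishable from prior samples; once trained, synthetic segments are obtained by decoding latent codes drawn from the Gaussian prior.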

Highlights

  • Mobile Lidar is the primary technology for capturing detailed 3d spatial data of road environments

  • We propose a semi-supervised approach based on variational autoencoder (VAE) and generative adversarial network (GAN) to generate synthetic point segments from real point segments obtained from a mobile lidar dataset

  • We evaluate the classification performance of a PointNet-like classification network trained with and without synthetic samples to test the effectiveness of synthetic samples for accurate classification of mobile lidar point clouds
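
To make the last two highlights concrete, the sketch below shows one plausible way to augment a training set with synthetic segments decoded from the latent prior and to feed the result to a PointNet-like classifier. The classifier layout, the assumption of a class-specific trained decoder, and all sizes are illustrative assumptions rather than the configuration reported in the paper.

```python
# Illustrative sketch: augment real training segments with synthetic ones
# produced by a trained (assumed class-specific) decoder, then classify.
import torch
import torch.nn as nn

class PointNetLikeClassifier(nn.Module):
    """Shared per-point MLP + max pooling + fully connected classification head."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.point_mlp = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                    # x: (B, N, 3)
        feat = self.point_mlp(x.transpose(1, 2)).max(dim=2).values
        return self.head(feat)               # class logits

def augment_with_synthetic(real_pts, real_labels, decoder, label, n_synth,
                           latent_dim=128):
    """Decode prior samples into synthetic segments of one class and append them."""
    with torch.no_grad():
        z = torch.randn(n_synth, latent_dim)
        synth_pts = decoder(z)               # (n_synth, N, 3)
    synth_labels = torch.full((n_synth,), label, dtype=real_labels.dtype)
    return torch.cat([real_pts, synth_pts]), torch.cat([real_labels, synth_labels])
```

The augmented tensors can then be used in a standard cross-entropy training loop; the comparison in the paper corresponds to training the same classifier once on the real segments alone and once on the augmented set.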

Introduction

Mobile lidar is the primary technology for capturing detailed 3d spatial data of road environments, and the point clouds it captures provide an accurate 3d representation of real-world objects. However, while mobile lidar point clouds offer an accurate geometric representation of the real world, they lack the semantic information that most applications require. The application of supervised machine learning to mobile lidar point clouds faces several critical challenges. Arguably the most critical is the lack of adequate training samples for every object class, which is especially relevant for state-of-the-art deep learning methods that require large numbers of training samples. Techniques that can aid in the generation of training data therefore become vital for improving the classification accuracy of point clouds.

