Abstract

Many publicly available point cloud datasets exist at present, most of them focused on autonomous driving. The objective of this study is to develop a new large-scale mobile 3D LiDAR point cloud dataset for outdoor scene semantic segmentation tasks, with a classification scheme suited to geospatial applications. Our dataset (Saint Petersburg 3D) contains both real-world (34 million points) and synthetic (34 million points) subsets that were acquired using real and virtual sensors with the same characteristics. We propose an original classification scheme of 10 universal object categories into which any scene represented by dense outdoor mobile LiDAR point clouds can be divided. The evaluation procedure for semantic segmentation of point clouds in geospatial applications is described. An experiment with the Kernel Point Fully Convolutional Neural Network model trained on the proposed dataset was carried out. We obtained an overall mIoU of 92.56%, which demonstrates the effectiveness of deep learning models for point cloud semantic segmentation in geospatial applications under the proposed classification scheme.
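For readers unfamiliar with the reported metric, the following is a minimal sketch (not the authors' code) of how per-class IoU and mean IoU (mIoU) are typically computed from per-point predicted and ground-truth labels in point cloud semantic segmentation; the function name and the toy labels are illustrative assumptions, not part of the dataset or its 10-category scheme.

```python
import numpy as np

def mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int):
    """Return (per-class IoU, mIoU) for integer label arrays of equal length."""
    ious = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))   # points correctly labeled c
        fp = np.sum((pred == c) & (gt != c))   # points wrongly labeled c
        fn = np.sum((pred != c) & (gt == c))   # points of class c missed
        denom = tp + fp + fn
        ious.append(tp / denom if denom > 0 else np.nan)  # skip absent classes
    ious = np.array(ious, dtype=float)
    return ious, float(np.nanmean(ious))

# Toy usage: per-point labels for a tiny cloud with 3 hypothetical classes
pred = np.array([0, 0, 1, 2, 2, 1])
gt   = np.array([0, 1, 1, 2, 2, 2])
per_class, miou = mean_iou(pred, gt, num_classes=3)
print(per_class, miou)
```

The overall figure quoted in the abstract is the mean of such per-class IoU values over the categories of the proposed classification scheme.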
