Abstract
Automation of 3D LiDAR point cloud processing is expected to increase the production rate of many applications, including automatic map generation. Rapid development of high-end hardware has boosted the expansion of deep learning research for 3D classification and segmentation. However, deep learning requires a large amount of high-quality training samples. Generating training samples that yield accurate classification results, especially for airborne point cloud data, remains problematic. Moreover, it is still unclear which tailor-made features are best suited for segmenting airborne point cloud data. This paper proposes semi-automatic point cloud labelling and examines the potential of combining different tailor-made features for pointwise semantic segmentation of an airborne point cloud. We implement a Dynamic Graph CNN (DGCNN) approach to classify airborne point cloud data into four land cover classes: bare land, trees, buildings, and roads. The DGCNN architecture is chosen because this network combines two approaches, PointNet and graph CNNs, to exploit the geometric relationships between points. For the experiments, we train DGCNN on an airborne point cloud and a co-aligned orthophoto of the Surabaya city area of Indonesia using three different tailor-made feature combinations: points with RGB (Red, Green, Blue) color, points with the original LiDAR features (Intensity, Return number, Number of returns), termed IRN, and points with two spectral colors and Intensity (Red, Green, Intensity), termed RGI. The overall accuracy on the testing area indicates that using RGB information gives the best segmentation result of 81.05%, while IRN and RGI give accuracies of 76.13% and 79.81%, respectively.
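The three tailor-made feature combinations can be illustrated as per-point feature arrays. This is a minimal sketch with synthetic data, not the paper's actual preprocessing pipeline; the array names and random values are hypothetical, assuming colors are sampled from the co-aligned orthophoto and the LiDAR attributes come with the point cloud:

```python
import numpy as np

# Hypothetical point cloud of N points with XYZ coordinates, RGB colors
# sampled from the co-aligned orthophoto, and original LiDAR attributes
# (Intensity, Return number, Number of returns).
N = 1000
rng = np.random.default_rng(0)
xyz = rng.random((N, 3))                  # point coordinates
rgb = rng.integers(0, 256, (N, 3))        # colors from the orthophoto
intensity = rng.random((N, 1))            # LiDAR intensity
return_num = rng.integers(1, 4, (N, 1))   # return number
num_returns = rng.integers(1, 4, (N, 1))  # number of returns

# The three feature combinations described in the abstract:
feat_rgb = np.hstack([xyz, rgb])                                 # points + RGB
feat_irn = np.hstack([xyz, intensity, return_num, num_returns])  # points + IRN
feat_rgi = np.hstack([xyz, rgb[:, :2], intensity])               # points + RGI (Red, Green, Intensity)

print(feat_rgb.shape, feat_irn.shape, feat_rgi.shape)  # each (1000, 6)
```

Each combination yields a 6-dimensional per-point input, so the same network configuration can be trained on all three variants.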
Highlights
Automatic object classification of large-scale point cloud data remains challenging due to high variations in object shape, size, color, and texture
Airborne point clouds and aerial photos have been used as main input data for various 3D mapping activities, as both provide high-resolution earth surface data
The results show that applying data fusion at the observation level can improve overall accuracy from 65% to 79%
Summary
Automatic object classification of large-scale point cloud data is still challenging due to high variations in object shape, size, color, and texture. PointNet, proposed by Qi et al. (2016), pioneered pointwise deep learning approaches for point cloud classification and segmentation. This computationally efficient network still suffers from a limited capability to exploit local information in the point sets (Jiang and Ma, 2019). Wicaksono et al. (2019) used an architecture similar to this study's, DGCNN, to classify building and non-building points using two different feature combinations: with and without color features. Based on their results, they stated that color features do not improve but can even degrade the semantic segmentation results. Xiu et al. (2019) implemented PointNet using Intensity (depth) and
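DGCNN addresses PointNet's limited use of local information by building edge features over each point's k nearest neighbors (the EdgeConv operation). The following is a minimal NumPy sketch of that input construction only, not the paper's implementation; the function name and the brute-force distance computation are illustrative assumptions:

```python
import numpy as np

def edge_features(points, k=4):
    """Sketch of DGCNN's EdgeConv input: for each point, concatenate the
    point itself with the offsets to its k nearest neighbors, so the
    network sees local geometric relationships rather than isolated points."""
    # pairwise squared distances, shape (N, N)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    # indices of the k nearest neighbors, excluding the point itself
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]
    neighbors = points[knn]                        # (N, k, 3)
    centers = np.repeat(points[:, None, :], k, 1)  # (N, k, 3)
    # edge feature (x_i, x_j - x_i), shape (N, k, 6)
    return np.concatenate([centers, neighbors - centers], axis=-1)

pts = np.random.default_rng(1).random((100, 3))
print(edge_features(pts).shape)  # (100, 4, 6)
```

In the full network, a shared MLP is applied to these edge features and the result is max-pooled over the k neighbors, and the graph is recomputed dynamically in feature space at each layer.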
Published in: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences