Abstract
Deep learning methods based on convolutional neural networks have been shown to give excellent results in semantic segmentation of images, but the inherent irregularity of point cloud data complicates their use in semantically segmenting 3D laser scanning data. To overcome this problem, point cloud networks specialized for the purpose have been developed since 2017, but finding the most appropriate way to semantically segment point clouds remains an open research question. In this study we attempted semantic segmentation of point cloud data with convolutional neural networks using only the raw measurements provided by a multi-echo-capable profiling laser scanner. We formatted the measurements into a series of 2D rasters, where each raster contains the measurements (range, reflectance, echo deviation) of a single scanner mirror rotation, in order to leverage the rich body of research on semantic segmentation of 2D images with convolutional neural networks. To the best of our knowledge, a similar approach for a profiling laser scanner in a forest context has not been proposed before. A boreal forest in the Evo region near Hämeenlinna, Finland, was used as the experimental study area. The data was collected with the FGI Akhka-R3 backpack laser scanning system, georeferenced, and then manually labelled into ground, understorey, tree trunk, and foliage classes for training and evaluation purposes. The labelled points were then transformed back to 2D rasters and used for training three different neural network architectures. Furthermore, the same georeferenced data in point cloud format was used for training the state-of-the-art point cloud semantic segmentation network RandLA-Net, and the results were compared with those of our method. Our best semantic segmentation network reached a mean Intersection-over-Union (mIoU) value of 80.1%, which is comparable to the 80.6% reached by the point cloud-based RandLA-Net.
The numerical results and visual analysis of the resulting point clouds show that our method is a valid approach to semantic segmentation of point clouds, at least in the forest context. The labelled datasets were also released to the research community.
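The mIoU metric used for the comparison above is the per-class Intersection-over-Union averaged over the classes. A minimal sketch of the computation (the function name and toy labels are illustrative, not from the paper):

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean Intersection-over-Union: for each class c,
    IoU_c = |pred==c AND true==c| / |pred==c OR true==c|,
    averaged over classes present in either array."""
    ious = []
    for c in range(num_classes):
        pred_c = y_pred == c
        true_c = y_true == c
        union = np.logical_or(pred_c, true_c).sum()
        if union > 0:  # skip classes absent from both prediction and truth
            inter = np.logical_and(pred_c, true_c).sum()
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy example with four classes, mirroring the paper's label set
# (ground, understorey, tree trunk, foliage):
y_true = np.array([0, 0, 1, 2, 3, 3])
y_pred = np.array([0, 1, 1, 2, 3, 0])
score = mean_iou(y_true, y_pred, num_classes=4)
```

Per-class IoUs here are 1/3, 1/2, 1, and 1/2, giving a mean of about 0.583.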
Highlights
Laser scanning is a measurement technique to determine shape, and possibly the appearance, of real-world objects and environments in the form of a point cloud
Deep learning methods based on convolutional neural networks have been shown to give excellent results in semantic segmentation of images, but the inherent irregularity of point cloud data complicates their use in semantically segmenting 3D laser scanning data
In this research we investigated whether non-georeferenced, raw mobile/kinematic laser scanner measurements in forest context contain enough information to classify the points for the purpose of modeling and analyzing forest structures
Summary
Laser scanning is a measurement technique to determine the shape, and possibly the appearance, of real-world objects and environments in the form of a point cloud. Mobile laser scanning (MLS) point clouds can be collected with multiple techniques, for example using hand-held, backpack, and mini-unmanned aerial vehicle (UAV) laser scanning. Processing and extracting useful information from large point clouds manually is time-consuming, so automatic methods are required. Semantic segmentation of the data into useful classes is an important step in utilizing 3D data, as it enables users to concentrate on the parts of the point clouds they are interested in. Common convolutional architectures require highly regular input data formats, such as 2D rasters or 3D voxels, to carry out, e.g., weight sharing and other kernel optimizations, but many approaches to utilizing them with irregular point cloud data have been explored (Guo et al., 2020).
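The regular-input requirement above is what motivates the paper's raster representation: each mirror rotation of the profiling scanner yields one row of measurements, and the per-beam channels (range, reflectance, echo deviation) stack into an image-like array a 2D CNN can consume. A minimal sketch under assumed shapes; the function name and fixed beams-per-rotation parameter are illustrative, not the authors' exact pipeline:

```python
import numpy as np

def rotations_to_raster(ranges, reflectance, echo_dev, beams_per_rotation):
    """Pack per-beam scanner measurements into a 2D multi-channel raster:
    rows = successive mirror rotations, columns = beam index within a
    rotation, channels = (range, reflectance, echo deviation)."""
    n_rot = len(ranges) // beams_per_rotation
    n = n_rot * beams_per_rotation  # drop a trailing partial rotation
    return np.stack(
        [np.asarray(a[:n], dtype=np.float32).reshape(n_rot, beams_per_rotation)
         for a in (ranges, reflectance, echo_dev)],
        axis=-1,
    )  # shape: (n_rot, beams_per_rotation, 3)
```

The resulting array can be fed to any standard 2D segmentation network, and per-pixel class predictions map straight back to the original points, since each pixel corresponds to one emitted beam.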