Abstract

An accurate understanding of urban objects is critical for urban modeling, intelligent infrastructure planning and city management. The semantic segmentation of light detection and ranging (LiDAR) point clouds is a fundamental approach for urban scene analysis. In recent years, several methods have been developed to segment urban furniture from point clouds. However, the traditional processing of large amounts of spatial data has become increasingly costly, both in time and money. Recently, deep learning (DL) techniques have been increasingly used for 3D segmentation tasks, yet most of these deep neural networks (DNNs) have been evaluated only on benchmark datasets. It is therefore arguable whether DL approaches can achieve state-of-the-art performance on 3D point cloud segmentation in real-life scenarios. In this research, we apply an adapted DNN (ARandLA-Net) to directly process large-scale point clouds. In particular, we develop a new paradigm for training and validation on a typical urban scene in central Europe (Munzingen, Freiburg, Baden-Württemberg, Germany). Our dataset consists of nearly 390 million dense points acquired by Mobile Laser Scanning (MLS); it contains a considerably larger number of sample points than existing datasets and includes object categories that are particularly relevant to smart-city and urban-planning applications. We further assess the DNN on our dataset and investigate several key challenges, such as data preparation strategies, the benefit of color information and the unbalanced class distribution found in the real world. The final segmentation model achieved a mean Intersection-over-Union (mIoU) score of 54.4% and an overall accuracy of 83.9%. Our experiments indicate that the data preparation strategy influences model performance, and that additional RGB information yields an approximately 4% higher mIoU score. Our results also demonstrate that weighted cross-entropy with inverse-square-root frequency weighting led to better segmentation performance than the other losses we considered.
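The weighting scheme named above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weight normalization and the toy class counts are assumptions, and the loss is written in plain numpy rather than a DL framework.

```python
import numpy as np

def inverse_sqrt_freq_weights(class_counts):
    """Per-class weights proportional to 1/sqrt(class frequency).

    Rare classes receive larger weights, which counteracts the
    unbalanced class distribution of real-world urban point clouds.
    """
    freqs = np.asarray(class_counts, dtype=np.float64)
    freqs = freqs / freqs.sum()
    w = 1.0 / np.sqrt(freqs)
    # Normalize so the weights average to 1 (an assumed convention).
    return w / w.sum() * len(freqs)

def weighted_cross_entropy(logits, labels, weights):
    """Mean weighted cross-entropy over N points.

    logits: (N, C) raw scores, labels: (N,) int class ids,
    weights: (C,) per-class weights.
    """
    # Numerically stable log-softmax.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    return float(np.mean(weights[labels] * nll))
```

For example, with counts `[90, 10]` the rare class gets a three-times-larger weight than the common one, so misclassifying rare urban furniture contributes more to the loss.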

Highlights

  • As our task was semantic segmentation, we focused on the correctness of the boundaries of each segment instead of the clustering performance

  • To explore the optimal input data configuration for our deep neural networks (DNNs), we evaluated different sampling and partitioning of the raw point clouds
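One common way to subsample raw point clouds before feeding them to a DNN is grid (voxel) subsampling, which keeps roughly one point per voxel. The sketch below is illustrative only; the voxel size is an assumed value and the paper's exact sampling and partitioning schemes are not reproduced here.

```python
import numpy as np

def grid_subsample(points, voxel_size=0.06):
    """Keep one representative point per voxel.

    points: (N, 3) xyz coordinates; returns the first point
    encountered in each occupied voxel, preserving input order.
    """
    # Integer voxel index of each point.
    keys = np.floor(points / voxel_size).astype(np.int64)
    # One index per unique voxel (first occurrence).
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]
```

Varying `voxel_size` trades point density against memory and runtime, which is one axis along which input data configurations can be compared.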


Introduction

To build resilient and sustainable cities, geometrically accurate three-dimensional (3D) models of urban environments are needed. Objects such as buildings, vegetation, roads and other relevant classes must be segmented in large 3D scenes for infrastructure monitoring, for example to control the growth of vegetation around critical infrastructure such as overhead power lines and railroad tracks. Light detection and ranging (LiDAR) is a technology with promising potential to assist in surveying, mapping, monitoring and assessing urban scenes [1]. Mobile Laser Scanning (MLS) was developed to enable the capture of 3D point clouds of large-scale urban scenarios with higher spatial resolution and more precise data.

