Abstract

Airborne laser scanning (ALS) point cloud semantic labeling, which aims to identify the category of each point, plays a significant role in many applications, such as forest monitoring and powerline extraction. Guided by deep learning technology, the way point clouds are interpreted has also changed greatly. However, owing to the irregular and unordered nature of point clouds, it is relatively difficult for a classification model to distinguish objects with similar geometry from single-modal data alone. Fortunately, complementary information, e.g., the color spectrum, which supplements geometric information, can effectively improve classification. The design of the fusion strategy is therefore a critical part of model construction. In this article, aiming to capture more abstract semantic information from color spectrum data, we elaborate a color spectrum fusion (CSF) module. It can be flexibly integrated into a classification pipeline with only a negligible number of additional parameters. We then expand data fusion ideas for point clouds and the color spectrum and investigate three possible fusion strategies. Accordingly, we develop three architectures to construct CSF-Nets. Ultimately, using a weighted cross-entropy loss, we train our CSF-Nets in an end-to-end manner. Experiments on two widely used datasets, Vaihingen 3D and LASDU, show that all three presented fusion approaches improve performance, while the earlier fusion strategy performs best. Besides, compared with other well-performing methods, CSF-Net still achieves satisfactory performance on the overall accuracy and mean $F_{1}$-score indicators. This also validates the effectiveness of our multimodal fusion network.
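The abstract mentions that the CSF-Nets are trained end-to-end with a weighted cross-entropy loss. The snippet below is a minimal PyTorch sketch of such a loss; the class counts and inverse-frequency weighting are illustrative assumptions, since the paper's exact weighting scheme is not given here.

```python
import torch
import torch.nn as nn

# Hypothetical per-class point counts for an imbalanced ALS dataset;
# the paper's actual weighting scheme is not specified here, so simple
# inverse-frequency weights are assumed purely for illustration.
class_counts = torch.tensor([120000.0, 30000.0, 5000.0, 800.0])
weights = class_counts.sum() / (len(class_counts) * class_counts)

criterion = nn.CrossEntropyLoss(weight=weights)

# Toy forward pass: logits for a batch of 8 points over 4 classes.
logits = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 4, (8,))
loss = criterion(logits, labels)  # rare classes contribute more per point
loss.backward()                   # gradients flow end-to-end as usual
```

Weighting by inverse class frequency is a common way to keep dominant classes (e.g., ground) from swamping rare ones (e.g., powerlines) during training.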

Highlights

  • In the past few decades, great progress has been achieved in sensor technology, making it easier to access remote sensing data, such as hyperspectral images [1], [2] and synthetic aperture radar (SAR) images [3], [4]

  • To investigate a more feasible and effective fusion approach and further boost airborne laser scanning (ALS) point cloud semantic labeling performance, we propose a novel Color Spectrum Fusion Network (CSF-Net)

  • This paper reveals that it is relatively difficult for a classification model to distinguish some objects with similar geometry using the point cloud modality alone


Summary

INTRODUCTION

In the past few decades, great progress has been achieved in sensor technology, making it easier to access remote sensing data, such as hyperspectral images [1], [2] and synthetic aperture radar (SAR) images [3], [4]. One common approach, the projection fusion method, first projects 3D point cloud data into 2D space and establishes the correspondence between the projected point cloud and the color spectrum. After that, it leverages powerful and effective 2D Convolutional Neural Networks (CNNs) to extract key features from the fused data and output the prediction results. Another approach, the attribute attachment fusion method, carefully constructs a deep-learning-based semantic labeling model to process the fused multimodal data. Such a method is relatively straightforward and easy to implement, but attaching colors to point clouds at the input layer of the network does not fully exploit the specificity of and relationships between heterogeneous data. To investigate a more feasible and effective fusion approach and further boost ALS point cloud semantic labeling performance, we propose a novel Color Spectrum Fusion Network (CSF-Net).
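To make the projection fusion idea concrete, here is a minimal, hypothetical NumPy sketch of the first step described above: rasterizing 3D points onto a top-down 2D grid and giving each cell the color of its highest return, producing an image a 2D CNN could consume. The cell size, highest-return rule, and the function name `project_to_grid` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def project_to_grid(points, colors, cell=1.0):
    """Top-down rasterization of colored ALS points (illustrative sketch).

    points: (N, 3) array of x, y, z coordinates
    colors: (N, 3) array of RGB values aligned with the points
    cell:   grid resolution in the same units as x and y (assumed)
    """
    xy = points[:, :2]
    ij = np.floor((xy - xy.min(axis=0)) / cell).astype(int)
    h, w = ij.max(axis=0) + 1
    image = np.zeros((h, w, 3), dtype=colors.dtype)
    top = np.full((h, w), -np.inf)  # highest z seen per cell so far
    for (i, j), z, c in zip(ij, points[:, 2], colors):
        if z > top[i, j]:           # keep the color of the highest return
            top[i, j] = z
            image[i, j] = c
    return image  # 2D color image ready for a standard 2D CNN

# Toy usage: 1000 random points over a 50 m x 50 m tile.
pts = np.random.rand(1000, 3) * np.array([50.0, 50.0, 10.0])
rgb = np.random.rand(1000, 3)
img = project_to_grid(pts, rgb, cell=1.0)
```

Keeping only the highest return per cell is one simple occlusion rule; real pipelines may instead interpolate or keep multiple channels (e.g., height plus color).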

Projection Fusion Method
Attribute Attachment Fusion Method
METHODOLOGY
Color Spectrum Fusion Module
Fusion Strategy
The Whole CSF-Net Architecture
EXPERIMENTS
Datasets
Section 1
Evaluation Indicator
Training Settings
Experimental Results
DISCUSSION
Findings
Performance Comparison on LASDU
CONCLUSION