Abstract

Airborne LiDAR point cloud classification has been a long-standing problem in photogrammetry and remote sensing. Early efforts either combined hand-crafted feature engineering with machine learning-based classification models or leveraged conventional convolutional neural networks (CNNs) on projected feature images. Recently proposed deep learning-based methods instead develop new convolution operators that can be applied directly to raw point clouds for representative point feature learning. Although these methods achieve satisfactory performance on airborne LiDAR point cloud classification, they cannot adequately recognize fine-grained local structures owing to the uneven density distribution of 3D point clouds. In this paper, to address this challenging issue, we introduce a density-aware convolution module that uses the point-wise density to reweight the learnable weights of the convolution kernels. The proposed convolution module can approximate continuous convolution on unevenly distributed 3D point sets. Based on this convolution module, we further develop a multi-scale CNN model with downsampling and upsampling blocks to perform per-point semantic labeling. In addition, to regularize the global semantic context, we implement a context encoding module that predicts a global context encoding and formulate a context encoding regularizer that aligns the predicted context encoding with the ground truth. The overall network can be trained end to end and directly produces the desired classification results in one forward pass. Experiments on the ISPRS 3D Labeling Dataset and the 2019 Data Fusion Contest Dataset demonstrate the effectiveness and superiority of the proposed method for airborne LiDAR point cloud classification.
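The core idea of density-aware reweighting can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; it is a simplified NumPy illustration assuming a Gaussian kernel density estimate per point and inverse-density weighting of each neighbor's contribution before a shared linear (convolution) kernel is applied. The function names and the `bandwidth`/`k` parameters are illustrative assumptions.

```python
import numpy as np

def knn(points, k):
    # Brute-force k-nearest neighbors: pairwise distances, drop self (index 0).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, 1:k + 1]
    return idx, d

def density_aware_conv(points, feats, weight, k=8, bandwidth=0.5):
    """Simplified density-reweighted point convolution (illustrative sketch).

    points: (N, 3) coordinates, feats: (N, C_in) features,
    weight: (C_in, C_out) shared learnable kernel.
    """
    n = points.shape[0]
    idx, d = knn(points, k)
    # Gaussian kernel density estimate per point over all pairwise distances.
    density = np.exp(-(d / bandwidth) ** 2).sum(axis=1)   # (N,)
    inv_density = 1.0 / density                           # sparse regions count more
    out = np.zeros((n, weight.shape[1]))
    for i in range(n):
        nbrs = idx[i]
        # Reweight each neighbor's feature by its inverse density,
        # then normalize so dense clusters do not dominate the sum.
        w = inv_density[nbrs][:, None]                    # (k, 1)
        agg = (feats[nbrs] * w).sum(axis=0) / w.sum()     # (C_in,)
        out[i] = agg @ weight
    return out
```

With constant input features the density-normalized aggregation is invariant to point density, which is the behavior the reweighting is meant to approximate; a learned implementation would apply this inside each downsampling/upsampling block of the network.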
