Abstract

Semantic labeling is an essential but challenging task in the interpretation of point clouds of 3D scenes. As a core step of scene interpretation, semantic labeling annotates every point in a point cloud with a semantically meaningful label, which plays a significant role in many point-cloud-related applications. For airborne laser scanning (ALS) point clouds, precise annotations can considerably broaden their use in a variety of applications. However, accurate and efficient semantic labeling remains challenging due to sensor noise, complex object structures, incomplete data, and uneven point densities. In this work, we propose a novel neural network for the semantic labeling of ALS point clouds that investigates the importance of long-range spatial and channel-wise relations, termed the global relation-aware attentional network (GraNet). GraNet first learns local geometric descriptions and local dependencies using a local spatial discrepancy attention convolution module (LoSDA). In LoSDA, orientation, spatial distribution, and elevation information are jointly encoded by stacking several local spatial geometric learning modules, and local dependencies are learned with an attention pooling module. Then, a global relation-aware attention module (GRA), consisting of a spatial relation-aware attention module (SRA) and a channel relation-aware attention module (CRA), is presented to learn attention weights from global structural relations and to enhance high-level features with long-range dependencies. These two modules are aggregated in a multi-scale network architecture to account for scale variations in large urban areas. We conducted comprehensive experiments on three ALS point cloud datasets to evaluate the performance of the proposed framework. The results show that our method achieves higher classification accuracy than other commonly used advanced classification methods. On the ISPRS benchmark dataset, our method improves the overall accuracy (OA) to 84.5% and the average F1 measure (AvgF1) to 73.6%, outperforming the other baselines. In addition, experiments were conducted on a new ALS point cloud dataset covering highly dense urban areas and on a newly published large-scale dataset.
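
The abstract does not spell out the exact formulation of the SRA and CRA modules, but the description matches the common self-attention pattern for point features, where an N x N point-affinity matrix captures long-range spatial relations and a C x C affinity matrix captures inter-channel relations. The following PyTorch sketch illustrates that pattern under those assumptions; the class names, the `reduction` parameter, and the learned residual weight `gamma` are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of spatial and channel relation-aware attention for point
# features of shape (B, N, C): B tiles, N points, C feature channels.
# This is an assumed formulation based on standard self-attention, not the
# published GraNet code.
import torch
import torch.nn as nn


class SpatialRelationAttention(nn.Module):
    """Hypothetical SRA: re-weights each point's features using pairwise
    point-to-point affinities, modeling long-range spatial dependencies."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.query = nn.Linear(channels, channels // reduction)
        self.key = nn.Linear(channels, channels // reduction)
        self.value = nn.Linear(channels, channels)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):  # x: (B, N, C)
        q, k, v = self.query(x), self.key(x), self.value(x)
        # (B, N, N) affinity between all point pairs, scaled dot product
        attn = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return x + self.gamma * (attn @ v)  # residual connection


class ChannelRelationAttention(nn.Module):
    """Hypothetical CRA: models inter-channel dependencies through a C x C
    affinity matrix computed from the features themselves."""

    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):  # x: (B, N, C)
        affinity = x.transpose(1, 2) @ x            # (B, C, C)
        attn = torch.softmax(affinity, dim=-1)
        return x + self.gamma * (x @ attn.transpose(1, 2))


# Usage: apply both modules to a batch of per-point features.
feats = torch.randn(2, 4096, 64)  # 2 tiles, 4096 points, 64-dim features
out = ChannelRelationAttention()(SpatialRelationAttention(64)(feats))
```

In this reading, each module adds its attention-refined features back onto the input through a residual connection, so the network can fall back to the local LoSDA features when global relations carry little signal, which is consistent with the paper's framing of GRA as an enhancement of high-level features.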
