Abstract

With the development of airborne light detection and ranging (LiDAR) technology, collecting large-scale 3D spatial information has become common and efficient. However, efficient and automatic semantic segmentation of LiDAR data, in the form of 3D point clouds, remains a persistent challenge. To address this, a dual attention neural network (DA-Net) is proposed, consisting of two different blocks, namely augmented edge representation (AER) and elevation attentive pooling (EAP). First, the AER block adaptively represents local orientation and position, thereby effectively enhancing geometric information. Second, the EAP block uses learned attention scores to encode the captured local features of centroid points into discriminative features. Finally, a location homogeneity (LH) module is devised to explore long-range relationships in an encoder-decoder network. Benefiting from the dual attention module, geometric information hidden in unorganized point clouds can be effectively propagated. In addition, the LH module forces the network to attend to the semantic consistency of elevated objects, which facilitates both point- and object-level point cloud semantic segmentation for scene understanding. A benchmark dataset is used to assess the proposed method, which achieves an overall accuracy of 85.98% and an average F1 score of 72.31%. In addition, comparisons with other recent deep learning methods on the 2019 Data Fusion Contest dataset further demonstrate the robustness and generalization ability of the proposed method.
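The EAP block as described, weighting each neighbor's features by learned attention scores before aggregating them into a single centroid feature, follows a common attentive pooling pattern. Below is a minimal sketch assuming a PyTorch-style implementation; the class name, layer sizes, and tensor shapes are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of attention-score pooling in the spirit of the EAP block.
# Assumes PyTorch; AttentivePooling and all sizes are hypothetical choices.
import torch
import torch.nn as nn


class AttentivePooling(nn.Module):
    """Aggregate K neighbor features per centroid with learned attention scores."""

    def __init__(self, channels: int):
        super().__init__()
        # Shared linear layer that predicts a score per channel for each neighbor.
        self.score_fn = nn.Linear(channels, channels, bias=False)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, neighbor_feats: torch.Tensor) -> torch.Tensor:
        # neighbor_feats: (N, K, C) local features around N centroid points.
        # Softmax over the K neighbors yields per-channel attention scores.
        scores = torch.softmax(self.score_fn(neighbor_feats), dim=1)  # (N, K, C)
        # Weighted sum collapses the neighborhood into one feature per centroid.
        pooled = (scores * neighbor_feats).sum(dim=1)                 # (N, C)
        return self.mlp(pooled)


if __name__ == "__main__":
    # 1024 centroids, 16 neighbors each, 64-dim features (illustrative sizes).
    feats = torch.randn(1024, 16, 64)
    pooled = AttentivePooling(64)(feats)
    print(pooled.shape)  # torch.Size([1024, 64])
```

Compared with max pooling, the learned scores let the network emphasize informative neighbors, which is consistent with the discriminative-feature encoding the abstract attributes to the EAP block.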
