Abstract

Point cloud semantic segmentation in urban scenes plays a vital role in intelligent city modeling, autonomous driving, and urban planning. Deep-learning-based point cloud semantic segmentation has achieved significant improvement. However, accurate semantic segmentation in large scenes remains challenging due to complex elements, a wide variety of scene classes, occlusions, and noise. Moreover, most methods need to split the original point cloud into multiple blocks before processing and cannot directly handle large-scale point clouds. We propose a novel context-aware network (CAN) that can directly process large-scale point clouds. In the proposed network, a Local Feature Aggregation Module (LFAM) is designed to preserve rich geometric details in the raw point cloud and reduce information loss during feature extraction. A Global Context Aggregation Module (GCAM) then captures long-range dependencies to enhance the network's feature representation and suppress noise. Finally, a Context-Aware Upsampling Module (CAUM) is embedded into the network to capture global perception from a broad perspective. The ensemble of low-level and high-level features facilitates the effectiveness and efficiency of 3D point cloud feature refinement. Comprehensive experiments were carried out on three large-scale point cloud datasets in both outdoor and indoor environments to evaluate the performance of the proposed network. The results show that the proposed method outperformed state-of-the-art representative semantic segmentation networks, achieving overall accuracy (OA) of 96.01%, 95.0%, and 88.55% on Tongji-3D, Semantic3D, and S3DIS, respectively.
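To make the three-module pipeline concrete, the following is a minimal numpy sketch of how local aggregation, global context, and context-aware upsampling could compose on per-point features. All function bodies here are illustrative stand-ins (k-NN mean pooling, attention-style weighting, and nearest-neighbor interpolation); the abstract does not specify the actual operators inside LFAM, GCAM, or CAUM, so none of this should be read as the paper's implementation.

```python
import numpy as np

def local_feature_aggregation(points, feats, k=4):
    """Hypothetical LFAM stand-in: average each point's features
    over its k nearest spatial neighbors to keep local geometry."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest neighbors per point
    return feats[idx].mean(axis=1)              # (N, C) locally smoothed features

def global_context_aggregation(feats):
    """Hypothetical GCAM stand-in: attention-style weighting over all
    points to model long-range dependencies."""
    scores = feats @ feats.T / np.sqrt(feats.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)           # row-wise softmax
    return w @ feats                            # (N, C) globally mixed features

def context_aware_upsample(coarse_pts, coarse_feats, fine_pts):
    """Hypothetical CAUM stand-in: propagate coarse features back to the
    full-resolution cloud via nearest-neighbor interpolation."""
    d = np.linalg.norm(fine_pts[:, None, :] - coarse_pts[None, :, :], axis=-1)
    nn = d.argmin(axis=1)                       # nearest coarse point per fine point
    return coarse_feats[nn]                     # (M, C) upsampled features
```

A real network would interleave these stages with learned weights and non-linearities; the sketch only shows the data flow, e.g. `context_aware_upsample(pts, global_context_aggregation(local_feature_aggregation(pts, feats)), full_res_pts)`.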
