• A novel network for semantic segmentation of large-scale urban point clouds.
• A self-attention-based, channel-wise enhanced global feature is constructed.
• A weighted semantic mapping module for highly accurate semantic segmentation.

Point clouds of large-scale urban street scenes contain a large number of object categories and rich semantic information. Semantic segmentation is the basis and key to subsequent essential applications such as digital twin engineering and city information modeling. The global feature of point clouds in large-scale scenes provides long-range context information, which is critical to high-quality semantic segmentation. However, the learning of global spatial saliency under class-label constraints is often ignored in the feature representation of existing deep learning models. To address this, we propose a Global Feature Self-Attention Encoding (GFSAE) module and a Weighted Semantic Mapping (WSM) module. The GFSAE module enhances the global feature channel by channel through self-attention so that the segmentation model focuses on globally salient feature expression, while the WSM module incorporates the constraints of semantic categories to learn a better segmentation model for urban street scenes. Experiments are performed on the Semantic3D dataset and our own vehicle-mounted Mobile Laser Scanning (MLS) point cloud dataset. The results show that the proposed GFSAE and WSM modules improve the semantic segmentation of point clouds in large-scale urban street scenes and demonstrate the effectiveness of our model compared with other state-of-the-art methods.
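The abstract does not detail the exact architecture, so the following is only a minimal sketch, assuming a PyTorch setting, of how a channel-by-channel self-attention enhancement of a global feature and a class-weighted objective (standing in for the semantic-category constraint) could look. The module name, tensor shapes, layer sizes, and class weights are illustrative assumptions, not the paper's published implementation.

```python
import torch
import torch.nn as nn


class ChannelSelfAttention(nn.Module):
    """Re-weight the channels of a global point-cloud feature with self-attention."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Linear(channels, channels, bias=False)
        self.key = nn.Linear(channels, channels, bias=False)
        self.value = nn.Linear(channels, channels, bias=False)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, global_feat: torch.Tensor) -> torch.Tensor:
        # global_feat: (B, C) global descriptor aggregated over all points.
        q = self.query(global_feat)                            # (B, C)
        k = self.key(global_feat)                              # (B, C)
        v = self.value(global_feat)                            # (B, C)
        # Channel-wise attention map: each output channel attends over all channels.
        attn = self.softmax(q.unsqueeze(2) * k.unsqueeze(1))   # (B, C, C)
        enhanced = torch.bmm(attn, v.unsqueeze(2)).squeeze(2)  # (B, C)
        # Residual connection keeps the original global context.
        return global_feat + enhanced


# Example usage: enhance a 256-dim global feature, then apply a class-weighted
# cross-entropy as a stand-in for the category constraint (8 hypothetical classes).
feat = torch.randn(4, 256)
enhanced = ChannelSelfAttention(256)(feat)

criterion = nn.CrossEntropyLoss(weight=torch.ones(8))  # per-class weights are placeholders
logits = torch.randn(4, 8)                             # scores from a segmentation head
labels = torch.randint(0, 8, (4,))
loss = criterion(logits, labels)
```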