Indoor point clouds often present significant challenges due to complex and varied structures and high similarity between objects. Local geometric structure helps a model learn object shape features at the detail level, while global context provides overall scene semantics and the spatial relationships between objects. To address these challenges, we propose a novel network architecture, PointMSGT, which comprises a multi-scale geometric feature extraction (MSGFE) module and a global Transformer (GT) module. The MSGFE module consists of a geometric feature extraction (GFE) module and a multi-scale attention (MSA) module. The GFE module reconstructs a triangle for each point from its two neighbors and extracts detailed local geometric relationships from the triangle’s centroid, normal vector, and plane constant. The MSA module extracts features through multi-scale convolutions and aggregates them adaptively, attending to both local geometric details and global semantic information across scale levels, which enhances the understanding of complex scenes. The global Transformer employs a self-attention mechanism to capture long-range dependencies across the entire point cloud. The proposed method achieves competitive performance in real-world indoor scenarios, reaching 68.6% mIoU for semantic segmentation on S3DIS and 86.4% OA for classification on ScanObjectNN.
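To make the triangle-based descriptor concrete, the sketch below illustrates one plausible reading of the GFE step: each point is joined with its two nearest neighbors to form a triangle, and the triangle's centroid, unit normal, and plane constant are collected as per-point features. The choice of nearest neighbors, the feature layout, and the helper name `triangle_geometric_features` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def triangle_geometric_features(points, k=2):
    """Sketch of a GFE-style descriptor (assumed formulation): for every point,
    build a triangle from the point and its two nearest neighbors, then return
    the triangle's centroid, unit normal, and plane constant d from n.x + d = 0."""
    n = points.shape[0]
    # Pairwise squared distances (brute force; a KD-tree would scale better).
    diff = points[:, None, :] - points[None, :, :]
    dist2 = np.einsum('ijk,ijk->ij', diff, diff)
    np.fill_diagonal(dist2, np.inf)            # exclude the point itself
    nn_idx = np.argsort(dist2, axis=1)[:, :k]  # indices of the two nearest neighbors

    feats = np.empty((n, 7), dtype=points.dtype)
    for i in range(n):
        p, a, b = points[i], points[nn_idx[i, 0]], points[nn_idx[i, 1]]
        centroid = (p + a + b) / 3.0
        normal = np.cross(a - p, b - p)
        norm = np.linalg.norm(normal)
        if norm > 1e-12:
            normal = normal / norm             # unit normal of the triangle
        d = -np.dot(normal, centroid)          # plane constant of n.x + d = 0
        feats[i] = np.concatenate([centroid, normal, [d]])
    return feats

# Example: 7-dimensional descriptors for a random toy cloud of 1024 points.
cloud = np.random.rand(1024, 3).astype(np.float32)
features = triangle_geometric_features(cloud)  # shape (1024, 7)
```

In a full pipeline these per-point geometric features would presumably be fed to the MSA module alongside raw coordinates; that interface is not specified in the abstract.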