Abstract

Traditional methods for weakly supervised semantic segmentation (WSSS) of point cloud scenes suffer from limited precision and difficulty handling complex scenes, owing to imprecise labels or partial annotations. To address these issues, we perform view-based adversarial training on the original point cloud scene samples through view resampling and Gaussian noise perturbation to reduce overfitting. Combining a self-attention mechanism with multi-layer perceptrons and a point cloud segmentation strategy, the network performs dimensionality expansion and reduction operations to better capture the local features of point cloud data. Finally, semantic segmentation results for the point cloud scene are obtained by fusing local and global semantic features. In the design of the network loss function, we combine a Siamese loss, a smoothness loss, and a cross-entropy loss to improve the discriminative ability and fidelity of the segmentation network. Specifically, the Siamese loss measures the distance between differently augmented versions of the same point cloud in their feature embedding space, while the smoothness loss penalizes discontinuities in semantic predictions between adjacent regions. The proposed weakly supervised segmentation network achieves overall segmentation accuracy close to that of fully supervised methods and outperforms most existing weakly supervised methods by 5% to 10% mIoU in scene segmentation on the S3DIS, ShapeNet, and PartNet datasets. Extensive experiments demonstrate the robustness, effectiveness, and generalization of the proposed point cloud segmentation network.
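To make the augmentation step concrete, the following is a minimal sketch of view resampling and Gaussian noise perturbation on a raw point cloud. It assumes the scene is an (N, 3) array; the jitter scale, clipping bound, and rotation about the vertical axis are illustrative choices, not the paper's exact settings.

```python
# Hedged sketch: view-based augmentation of a point cloud scene.
# sigma, clip, and the up-axis rotation are illustrative assumptions.
import numpy as np

def gaussian_jitter(points: np.ndarray, sigma: float = 0.01, clip: float = 0.05) -> np.ndarray:
    """Perturb each point with clipped Gaussian noise."""
    noise = np.clip(sigma * np.random.randn(*points.shape), -clip, clip)
    return points + noise

def view_resample(points: np.ndarray) -> np.ndarray:
    """Simulate a new viewpoint by rotating the scene about the up (z) axis."""
    theta = np.random.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]], dtype=points.dtype)
    return points @ rot.T

# Two augmented views of the same scene feed the Siamese comparison.
scene = np.random.rand(2048, 3).astype(np.float32)
view_a = gaussian_jitter(view_resample(scene))
view_b = gaussian_jitter(view_resample(scene))
```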

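The composite loss can likewise be sketched in PyTorch. This is a hedged reading of the abstract, not the paper's exact formulation: the weighting coefficients `lambda_siam` and `lambda_smooth`, the use of mean squared distance for the Siamese term, and the K-nearest-neighbor index tensor are all assumptions introduced for illustration.

```python
# Hedged sketch of the combined Siamese + smoothness + cross-entropy loss.
# lambda_siam, lambda_smooth, and neighbor_idx are illustrative assumptions.
import torch
import torch.nn.functional as F

def composite_loss(emb_a, emb_b, logits, labels, labeled_mask, neighbor_idx,
                   lambda_siam=1.0, lambda_smooth=0.1):
    # Siamese loss: distance between per-point embeddings of the two
    # augmented views in the feature embedding space.
    siamese = F.mse_loss(emb_a, emb_b)

    # Smoothness loss: penalize differing class posteriors between each
    # point and its spatial neighbors (neighbor_idx: (N, K) long indices).
    probs = F.softmax(logits, dim=-1)          # (N, C)
    neigh = probs[neighbor_idx]                # (N, K, C)
    smooth = (probs.unsqueeze(1) - neigh).pow(2).sum(-1).mean()

    # Cross-entropy only on the sparsely annotated points (weak supervision).
    ce = F.cross_entropy(logits[labeled_mask], labels[labeled_mask])

    return ce + lambda_siam * siamese + lambda_smooth * smooth
```

Here the cross-entropy term is restricted by `labeled_mask` to the small labeled subset, while the Siamese and smoothness terms provide supervision-free regularization over all points.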