Abstract

As a core task in 3D scene information extraction, point cloud semantic segmentation is crucial for 3D scene understanding and environmental perception. While extracting local geometric structural features from point clouds, existing research often overlooks the long-range dependencies present in a scene, making it difficult to fully uncover the long-range contextual features hidden within point clouds. To address this, we propose a segmentation algorithm (DG-Net) that integrates dual-neighborhood features with global spatial awareness. First, a local structure information encoding module is designed to learn local geometric shapes by encoding spatial position and directional features, thereby supplementing structural information. Next, a dual-neighborhood feature complementation module is introduced to merge the geometric structural and semantic features within local neighborhoods, learning local dependencies and capturing distinguishable local contextual features. Finally, these features are passed to a global spatial-aware module equipped with a gated unit, which dynamically adjusts the weights of features from different stages, effectively modeling long-range dependencies between local structures and finely extracting long-range contextual features. We conducted experiments on benchmark point cloud scene datasets, and both quantitative and qualitative results demonstrate that our algorithm accurately identifies small-scale objects with complex geometric structures and surpasses other mainstream networks in segmentation performance. The mIoU scores on the S3DIS, Toronto3D, and SensatUrban datasets are 71.9%, 82.1%, and 59.8%, respectively.
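
To make the gating idea in the abstract concrete, the following is a minimal sketch (not the authors' implementation) of a gated unit that dynamically re-weights features from different stages before fusing them. The module name, tensor shapes, and the two-stage setup are illustrative assumptions.

```python
# Illustrative sketch only: a learned gate produces per-channel weights in (0, 1)
# and blends features from two stages (e.g., local and global context).
import torch
import torch.nn as nn


class GatedStageFusion(nn.Module):
    """Fuse per-point features from two stages with a data-dependent gate (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        # The gate sees both stages and outputs a weight per channel.
        self.gate = nn.Sequential(
            nn.Linear(2 * channels, channels),
            nn.Sigmoid(),
        )

    def forward(self, local_feat: torch.Tensor, global_feat: torch.Tensor) -> torch.Tensor:
        # local_feat, global_feat: (batch, num_points, channels)
        g = self.gate(torch.cat([local_feat, global_feat], dim=-1))
        # Convex combination: g emphasizes one stage, (1 - g) the other.
        return g * local_feat + (1.0 - g) * global_feat


if __name__ == "__main__":
    fusion = GatedStageFusion(channels=64)
    local_feat = torch.randn(2, 1024, 64)   # e.g., dual-neighborhood local context
    global_feat = torch.randn(2, 1024, 64)  # e.g., global spatial-aware context
    print(fusion(local_feat, global_feat).shape)  # torch.Size([2, 1024, 64])
```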
