Abstract
In recent years, point cloud segmentation has played an increasingly pivotal role in tunnel construction and maintenance. Traditional methods for segmenting point clouds in tunnel scenes typically rely on a multitude of attributes, including spatial distribution, color, normal vectors, intensity, and density. However, underground tunnel scenes are more complex than road tunnel scenes, featuring dim lighting, indistinct tunnel-wall boundaries, and disordered pipelines. Furthermore, data-quality issues, such as the lack of color information and insufficient annotated data, contribute to the subpar performance of conventional point cloud segmentation algorithms. To address these issues, a 3D point cloud segmentation framework specifically for underground tunnels is proposed based on the Segment Anything Model (SAM). This framework leverages the generalization capability of the visual foundation model to automatically adapt to various scenes and efficiently segment tunnel point clouds. Specifically, the tunnel point cloud is first sliced along the tunnel axis. Each slice is then projected onto a two-dimensional plane, and various projection methods and point cloud coloring techniques are employed to enhance SAM's segmentation performance on the resulting images. Finally, semantic segmentation of the entire underground tunnel is achieved by using a small set of manually annotated semantic labels as prompts in a progressive, recursive manner. The key feature of this method is its independence from model training: it directly and efficiently addresses tunnel point cloud segmentation by capitalizing on the generalization capability of the foundation model. Comparative experiments against the classical region growing algorithm and the PointNet++ deep learning method demonstrate the superior performance of the proposed algorithm.
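The slicing and projection steps summarized above can be sketched in a few lines of NumPy. The following is a minimal illustration, not the authors' implementation: it assumes the tunnel axis is roughly aligned with the x-axis, that each point carries an intensity value used for grayscale coloring, and that the slice width, raster resolution, and image size are arbitrary placeholder parameters.

```python
# Minimal sketch (not the paper's code): slice a tunnel point cloud along its
# axis and rasterize one slice into a 2D grayscale image that could be fed to SAM.
import numpy as np

def slice_along_axis(points, slice_width=2.0, axis=0):
    """Group points into slices of fixed width along the given axis.

    points: (N, 4) array of x, y, z, intensity.
    Returns a list of (M_i, 4) arrays, one per slice.
    """
    coords = points[:, axis]
    bins = np.floor((coords - coords.min()) / slice_width).astype(int)
    return [points[bins == b] for b in np.unique(bins)]

def project_slice_to_image(slice_pts, resolution=0.02, img_size=(512, 512)):
    """Orthographically project a slice onto the y-z plane and color pixels
    by normalized intensity, a stand-in for the paper's coloring step."""
    h, w = img_size
    img = np.zeros((h, w), dtype=np.uint8)
    y, z, inten = slice_pts[:, 1], slice_pts[:, 2], slice_pts[:, 3]
    # Map metric coordinates to pixel indices.
    cols = np.clip(((y - y.min()) / resolution).astype(int), 0, w - 1)
    rows = np.clip(((z - z.min()) / resolution).astype(int), 0, h - 1)
    px_rows = h - 1 - rows  # flip so z points "up" in the image
    # Normalize intensity to 0-255 for a displayable grayscale image.
    gray = np.clip(255 * (inten - inten.min()) / (np.ptp(inten) + 1e-9), 0, 255)
    img[px_rows, cols] = gray.astype(np.uint8)
    # Keep per-point pixel indices so 2D masks can be mapped back to 3D points.
    return img, (px_rows, cols)

if __name__ == "__main__":
    # Synthetic stand-in for a scanned tunnel: a noisy cylinder along x.
    n = 50_000
    theta = np.random.uniform(0, 2 * np.pi, n)
    x = np.random.uniform(0, 40, n)
    pts = np.stack([x, 3 * np.cos(theta), 3 * np.sin(theta),
                    np.random.uniform(0, 1, n)], axis=1)
    slices = slice_along_axis(pts, slice_width=2.0)
    img, pix = project_slice_to_image(slices[0])
    print(f"{len(slices)} slices; first image shape {img.shape}")
```

The per-point pixel indices returned alongside each image are what allow 2D masks predicted on the projection to be lifted back onto the 3D points of the corresponding slice.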