Abstract

The current practice for creating as-built geometric Digital Twins (gDTs) of industrial facilities is both labour-intensive and error-prone. In aged industries it typically involves manually crafting a CAD or BIM model from a point cloud collected with terrestrial laser scanners. Recent advances in deep learning (DL) offer the possibility of automating semantic and instance segmentation of point clouds, contributing to a more efficient modelling process. DL networks, however, are data-intensive and require large domain-specific datasets. Producing labelled point cloud datasets involves considerable manual labour, and no open-source instance segmentation dataset exists for the industrial domain. We propose a semi-automatic workflow that leverages the object descriptions contained in existing gDTs to efficiently create semantic- and instance-labelled point cloud datasets. To demonstrate the efficiency of our workflow, we apply it to two separate areas of a gas processing plant covering a total of 40,000 m². We record the effort needed to process one of the areas, labelling a total of 260 million points in 70 h. When benchmarking on a state-of-the-art 3D instance segmentation network, the additional data from this 70-hour effort raises mIoU from 24.4% to 44.4%, AP from 19.7% to 52.5%, and RC from 45.9% to 76.7%.
