Abstract

This work addresses the automatic reconstruction of objects useful for BIM, such as walls, floors and ceilings, from meshed and texture-mapped 3D point clouds of indoor scenes. To this end, we focus on the semantic segmentation of 3D indoor meshes as the initial step towards the automatic generation of BIM models. Our investigations are based on the benchmark dataset ScanNet, which targets the interpretation of 3D indoor scenes and provides 3D meshed representations collected with low-cost range cameras. In our opinion, such RGB-D data has great potential for the automated reconstruction of BIM objects.

Highlights

  • Building Information Modelling (BIM), which became popular starting in 2002, is considered an intelligent 3D modelling approach focusing on the design, construction and management of a building site (Autodesk, 2018)

  • Even though the use of Building Information Models (BIMs) is state of the art for current building constructions, research is still ongoing to create these models automatically from scanning data and to keep them updated over time

  • Since a considerably large number of buildings are in the last-mentioned situation, automated reconstruction methods are needed that can deliver, from the input data, suitable models enabling the creation of BIM objects


Summary

INTRODUCTION

Building Information Modelling (BIM), which became popular starting in 2002, is considered an intelligent 3D modelling approach focusing on the design, construction and management of a building site (Autodesk, 2018). Even if point cloud data still remains a standard format for this kind of task, 3D meshes and voxel grids are used more and more. All these aspects motivated us to make use of an existing benchmark containing indoor data captured with a low-cost sensor in order to classify indoor environments as an important step in BIM creation. The sensor can measure the distance to surrounding objects in a range of 0.3–3.5 m, with an accuracy varying from 0.1–1.1% (Occipital Structure, 2018). The advantages of this benchmark dataset are, on the one hand, the large size of the data, which enables different test scenarios, and, on the other hand, that it provides raw RGB-D data and camera poses, which enable volumetric fusion (Curless and Levoy, 1996) and the extraction of a surface mesh.
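The volumetric fusion referenced above (Curless and Levoy, 1996) accumulates, per voxel, a running weighted average of truncated signed distances to the observed surface across successive depth frames; the zero crossing of this field yields the surface mesh. The following is a minimal one-voxel sketch of that update rule; the class, function names and numeric values are illustrative assumptions, not the paper's implementation.

```python
def truncate(sdf, trunc):
    """Clamp a signed distance to the truncation band [-trunc, trunc]."""
    return max(-trunc, min(trunc, sdf))


class TSDFVoxel:
    """One voxel of a truncated signed distance field (TSDF)."""

    def __init__(self):
        self.value = 0.0   # running truncated signed distance D(x)
        self.weight = 0.0  # accumulated observation weight W(x)

    def integrate(self, sdf, weight, trunc=0.05):
        """Fold one depth observation into the voxel as a weighted
        running average, as in Curless and Levoy's cumulative update:
        D <- (D*W + d*w) / (W + w), W <- W + w."""
        d = truncate(sdf, trunc)
        self.value = (self.value * self.weight + d * weight) / (self.weight + weight)
        self.weight += weight


# Usage: fuse two noisy observations of the same surface point.
v = TSDFVoxel()
v.integrate(0.02, 1.0)  # surface ~2 cm in front of this voxel
v.integrate(0.04, 1.0)  # a second, noisier measurement
print(round(v.value, 3))  # weighted average of the two distances: 0.03
```

Because each voxel only stores a value and a weight, new RGB-D frames can be fused incrementally; meshing (e.g. via marching cubes over the zero crossing) is a separate step.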

Mesh segmentation techniques
Scan-to-BIM process
METHODOLOGY
EXPERIMENTS
Findings
CONCLUSIONS