Abstract

Deep learning is a powerful tool for extracting both individual building and roof plane polygons. However, deep learning requires a large amount of labeled data. Publicly available level of detail (LoD)-2 datasets are therefore a natural choice for training fully convolutional network (FCN) models for both building section and roof plane instance segmentation. Since publicly available datasets are often derived automatically, e.g. from laser scanning, their annotation accuracy is limited. To complement such a dataset, we introduce manually annotated and synthetically generated data. Manually annotated data is domain-specific and has high annotation quality, but is expensive to obtain. Synthetically generated data has high-quality annotations by definition, but lacks domain specificity. Moreover, we detect not only individual building section instances, but also roof plane instances. We predict not only the separation lines between individual buildings, but also an additional class describing the lines that separate roof planes. The predicted building and roof plane instances are polygonized by a simple tree search algorithm. To obtain more regular polygons, we apply the Douglas-Peucker polygon simplification algorithm. We describe our dataset in detail to allow comparison with subsequent methods. To facilitate future work on building and roof plane prediction, our Roof3D dataset is accessible at https://github.com/dlrPHS/GPUB.
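
The abstract states that predicted instances are polygonized and then regularized with the Douglas-Peucker algorithm. As a point of reference, the following is a minimal, self-contained sketch of Douglas-Peucker simplification in Python; the function names, tolerance value, and example ring are illustrative assumptions and are not taken from the authors' implementation.

```python
# Minimal sketch of Douglas-Peucker polyline simplification, assuming a
# polygon outline is given as a list of (x, y) vertices (first == last).
# Tolerance and the example ring below are hypothetical.
import math


def _point_line_distance(p, a, b):
    """Perpendicular distance of point p from the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:  # degenerate chord: a and b coincide
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)


def douglas_peucker(points, tolerance):
    """Recursively drop vertices that deviate less than tolerance from the chord."""
    if len(points) < 3:
        return list(points)
    # Find the vertex farthest from the chord between the two endpoints.
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = _point_line_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > tolerance:
        # Keep that vertex and simplify both halves independently.
        left = douglas_peucker(points[: index + 1], tolerance)
        right = douglas_peucker(points[index:], tolerance)
        return left[:-1] + right
    # All intermediate vertices lie within the tolerance band: drop them.
    return [points[0], points[-1]]


if __name__ == "__main__":
    # Hypothetical jagged roof-plane outline (closed ring).
    ring = [(0, 0), (2, 0.1), (4, -0.1), (6, 0), (6, 3), (3, 3.05), (0, 3), (0, 0)]
    print(douglas_peucker(ring, tolerance=0.2))
    # -> [(0, 0), (6, 0), (6, 3), (0, 3), (0, 0)]
```

In this kind of post-processing, the tolerance controls the trade-off between geometric fidelity and polygon regularity: larger values yield simpler, more regular outlines at the cost of deviating further from the predicted instance boundary.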
