Abstract

Automatic damage assessment by analysing UAV-derived 3D point clouds provides fast information on the damage situation after an earthquake. However, the assessment of different damage grades is challenging given the variety in damage characteristics and limited transferability of methods to other geographic regions or data sources. We present a novel change-based approach to automatically assess multi-class building damage from real-world point clouds using a machine learning model trained on virtual laser scanning (VLS) data. Therein, we (1) identify object-specific point cloud-based change features, (2) extract changed building parts using k-means clustering, (3) train a random forest machine learning model with VLS data based on object-specific change features, and (4) use the classifier to assess building damage in real-world photogrammetric point clouds. We evaluate the classifier with respect to its capacity to classify three damage grades (heavy, extreme, destruction) in pre-event and post-event point clouds of an earthquake in L’Aquila (Italy). Using object-specific change features derived from bi-temporal point clouds, our approach is transferable with respect to multi-source input point clouds used for model training (VLS) and application (real-world photogrammetry). We further achieve geographic transferability by using simulated training data which characterises damage grades across different geographic regions. The model yields high multi-target classification accuracies (overall accuracy: 92.0%–95.1%). Classification performance improves only slightly when using real-world region-specific training data (< 3% higher overall accuracies). We consider our approach especially relevant for applications where timely information on the damage situation is required and sufficient real-world training data is not available.
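To make the pipeline described above concrete, the sketch below illustrates the general idea with stand-in components: per-point change is approximated by nearest-neighbour distances between epochs (not the paper's object-specific change features), k-means separates changed from unchanged building parts, and a random forest is trained on synthetic "VLS-like" buildings before being applied to held-out data. All helper functions and the synthetic data generator are hypothetical and serve only to show the structure of such an approach.

```python
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score


def per_point_change(pre_pts, post_pts):
    """Distance from each post-event point to its nearest pre-event point
    (a simple stand-in for the paper's change quantification)."""
    tree = cKDTree(pre_pts)
    dist, _ = tree.query(post_pts, k=1)
    return dist


def building_change_features(pre_pts, post_pts, n_clusters=2):
    """Aggregate per-building change features (illustrative, not the paper's set).

    k-means on the per-point change signal separates 'changed' from
    'unchanged' parts; summary statistics of the changed part form the
    feature vector used for classification.
    """
    change = per_point_change(pre_pts, post_pts).reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(change)
    changed_cluster = int(np.argmax([change[labels == c].mean() for c in range(n_clusters)]))
    changed = change[labels == changed_cluster]
    return np.array([
        changed.mean(),                # mean change of the changed part
        changed.max(),                 # maximum change
        changed.size / change.size,    # fraction of points assigned to the changed part
    ])


rng = np.random.default_rng(42)


def synthetic_building(damage_grade, n=2000):
    """Toy bi-temporal point clouds whose change magnitude grows with damage grade."""
    pre = rng.uniform(0, 10, size=(n, 3))
    shift = (damage_grade + 1) * 0.5 * rng.standard_normal((n, 3))
    return pre, pre + shift


grades = [0, 1, 2]  # heavy, extreme, destruction (encoded labels)

# Train on simulated (VLS-like) buildings ...
X_train, y_train = [], []
for g in grades:
    for _ in range(30):
        pre, post = synthetic_building(g)
        X_train.append(building_change_features(pre, post))
        y_train.append(g)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(np.vstack(X_train), y_train)

# ... and apply the classifier to independent buildings standing in for real-world data.
X_test, y_test = [], []
for g in grades:
    for _ in range(10):
        pre, post = synthetic_building(g)
        X_test.append(building_change_features(pre, post))
        y_test.append(g)

print("overall accuracy:", accuracy_score(y_test, clf.predict(np.vstack(X_test))))
```

In the paper, the training features come from virtual laser scanning of damaged building models and the application data from real-world photogrammetric point clouds; the sketch only mirrors that train-on-simulated, apply-to-real structure with synthetic data.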
