Abstract

This study aims to facilitate more reliable automated post-disaster assessment of damaged buildings using multiple-view imagery. Toward this end, a Multi-View Convolutional Neural Network (MV-CNN) architecture is proposed that combines information from different views of a damaged building, yielding a 3-D aggregation of the 2-D damage features from each view. This spatial 3-D damage context enables more accurate and reliable damage quantification in affected buildings. For validation, the presented model is trained and tested on a real-world visual data set of expert-labeled buildings following Hurricane Harvey. The developed model predicts the exact damage state of a building with 65% accuracy, rising to about 81% when a ±1 class deviation from ground truth is allowed on a five-level damage scale. A value of information (VOI) analysis reveals that hybrid models, which consider at least one aerial and one ground view, perform better.
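The core idea of the multi-view fusion can be sketched in simplified form. The snippet below is a minimal, hypothetical illustration only: the paper's MV-CNN extracts learned 2-D features per view with convolutional layers, whereas here each view is represented by a toy feature vector, fused by element-wise max pooling (a common MV-CNN fusion step), and mapped to one of five damage states via assumed score thresholds. All function names and thresholds are illustrative, not from the paper.

```python
def extract_view_features(view):
    """Stand-in for a per-view CNN backbone; here it simply passes the
    precomputed toy feature vector through unchanged."""
    return view

def aggregate_views(view_features):
    """Fuse per-view features by element-wise max pooling across views."""
    return [max(vals) for vals in zip(*view_features)]

def classify_damage(features, thresholds=(0.2, 0.4, 0.6, 0.8)):
    """Map the mean of the aggregated features to a damage state 0-4
    (thresholds are illustrative, not the paper's classifier)."""
    score = sum(features) / len(features)
    return sum(score > t for t in thresholds)

# Example: three views of one building (e.g., one aerial, two ground).
views = [
    [0.1, 0.9, 0.3],  # aerial view
    [0.5, 0.2, 0.7],  # ground view 1
    [0.4, 0.6, 0.1],  # ground view 2
]
per_view = [extract_view_features(v) for v in views]
pooled = aggregate_views(per_view)   # -> [0.5, 0.9, 0.7]
state = classify_damage(pooled)      # damage state on the five-level scale
```

Max pooling makes the fused representation invariant to the number and order of views, which is why it is a common choice for combining an arbitrary mix of aerial and ground images of the same building.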

