Abstract

The timely and accurate recognition of damage to buildings after destructive disasters is one of the most important post-event responses. Because affected areas are often complex and dangerous, field surveys of post-disaster conditions are not always feasible, and satellite imagery offers a way to overcome this problem. However, the textural and contextual features of post-event satellite images vary with disaster type, which makes it difficult to apply a model developed for one disaster type to buildings damaged by other types of disasters; a single model therefore rarely recognizes post-disaster building damage effectively and automatically across a broad range of disaster types. In this paper, we introduce a building damage detection network (BDD-Net), a novel end-to-end deep convolutional neural network for pixel-level classification of remote sensing imagery. BDD-Net automatically classifies every pixel of a post-disaster image into one of three classes: non-damaged building, damaged building, or background. Pre- and post-disaster images are provided as input to increase the available semantic information, and a hybrid loss function that combines dice loss and focal loss is used to optimize the network. Publicly available data were used to train and test the model, which makes the presented method readily repeatable and comparable. The protocol was tested on images of five disaster types, namely flood, earthquake, volcanic eruption, hurricane, and wildfire. The results show that the proposed method is consistently effective for recognizing buildings damaged by different disasters and in different areas.
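The hybrid loss mentioned above, combining dice loss and focal loss, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the function names, the weighting parameter `alpha`, and the focusing parameter `gamma` are illustrative assumptions; the inputs are assumed to be per-pixel softmax probabilities and one-hot targets over the three classes (non-damaged building, damaged building, background).

```python
import numpy as np

def dice_loss(probs, targets, eps=1e-6):
    # Soft Dice loss: 1 - (2 * overlap / total mass), computed over all
    # pixels and classes; eps avoids division by zero on empty masks.
    inter = np.sum(probs * targets)
    union = np.sum(probs) + np.sum(targets)
    return 1.0 - (2.0 * inter + eps) / (union + eps)

def focal_loss(probs, targets, gamma=2.0, eps=1e-6):
    # Focal loss: cross-entropy down-weighted for well-classified pixels
    # by the modulating factor (1 - p_t)^gamma.
    p_t = np.sum(probs * targets, axis=-1)  # probability of the true class
    return np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + eps))

def hybrid_loss(probs, targets, alpha=0.5):
    # Weighted sum of the two terms; alpha (assumed here) balances them.
    return alpha * dice_loss(probs, targets) + (1.0 - alpha) * focal_loss(probs, targets)
```

For example, with `probs` of shape `(num_pixels, 3)` and one-hot `targets` of the same shape, a perfect prediction drives both terms toward zero, while the focal term keeps easy background pixels from dominating the gradient in class-imbalanced scenes.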

Highlights

  • Natural disasters are often highly destructive and unpredictable

  • The input form and loss function are important for the performance of the building damage detection network (BDD-Net)

  • The classification experiments suggest that BDD-Net consistently achieves satisfactory results across a variety of disaster scenarios, demonstrating that the performance of the convolutional neural network (CNN) does not degrade across different disaster types


Introduction

Natural disasters are often highly destructive and unpredictable. People’s lives can be threatened by these disasters and their property can be looted in the aftermath. Since ground-based manual statistical methods are slow and unsafe (for example, there are often aftershocks after a major earthquake, so it can be very dangerous to conduct field surveys at this time), very high resolution (VHR) satellite imagery is an attractive data source for disaster damage assessment and quick decision support. Such imagery can capture spatially explicit details at a broad scale without the need for manual field research and is feasible for rapidly analyzing and mapping damaged buildings over a large area [1].
