Abstract

An important and effective means of preliminary earthquake mitigation and relief is the rapid estimation of building damage using high-spatial-resolution remote sensing technology. Traditional object detection methods rely only on hand-crafted shallow features of post-earthquake remote sensing images; in uncertain and complex background environments, feature selection is time-consuming and satisfactory results are difficult to obtain. Therefore, this study applies the convolutional neural network (CNN)-based object detection method You Only Look Once (YOLOv3) to locate collapsed buildings in post-earthquake remote sensing images. Moreover, YOLOv3 was improved to obtain more effective detection results. First, we replaced the Darknet53 CNN in YOLOv3 with the lightweight CNN ShuffleNet v2. Second, the prediction-box center-point (XY) loss and width-height (WH) loss terms in the loss function were replaced with the generalized intersection over union (GIoU) loss. Experiments using the improved YOLOv3 model on 0.5 m high-spatial-resolution aerial remote sensing images acquired after the Yushu and Wenchuan earthquakes show a significant reduction in the number of parameters, a detection speed of up to 29.23 f/s, and a target precision of 90.89%. Compared with the standard YOLOv3, the detection speed improved by 5.21 f/s and the precision by 5.24%. Moreover, the improved model showed stronger noise immunity, indicating a significant improvement in generalization. The improved YOLOv3 model is therefore effective for detecting collapsed buildings in post-earthquake high-resolution remote sensing images.
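
As a concrete illustration of the loss-function change described above, the following is a minimal sketch of a GIoU-based bounding-box loss in PyTorch. It is not the authors' implementation: the function name giou_loss, the (x1, y1, x2, y2) box encoding, and the use of PyTorch are assumptions made here for illustration only.

    import torch

    def giou_loss(pred, target, eps=1e-7):
        # Boxes are (..., 4) tensors in (x1, y1, x2, y2) format (assumed encoding).
        # GIoU = IoU - |C \ (A U B)| / |C|, where C is the smallest enclosing box;
        # the loss is 1 - GIoU.

        # Intersection of predicted and target boxes
        ix1 = torch.max(pred[..., 0], target[..., 0])
        iy1 = torch.max(pred[..., 1], target[..., 1])
        ix2 = torch.min(pred[..., 2], target[..., 2])
        iy2 = torch.min(pred[..., 3], target[..., 3])
        inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)

        # Union area and IoU
        area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
        area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
        union = area_p + area_t - inter
        iou = inter / (union + eps)

        # Smallest enclosing box C
        cx1 = torch.min(pred[..., 0], target[..., 0])
        cy1 = torch.min(pred[..., 1], target[..., 1])
        cx2 = torch.max(pred[..., 2], target[..., 2])
        cy2 = torch.max(pred[..., 3], target[..., 3])
        c_area = (cx2 - cx1) * (cy2 - cy1)

        giou = iou - (c_area - union) / (c_area + eps)
        return 1.0 - giou  # one term replacing the separate XY and WH losses

In such a setup, this single box-regression term would replace the separate XY (center) and WH (width-height) terms of the standard YOLOv3 loss, while the objectness and class terms remain unchanged.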

Highlights

  • Acquiring information on building damage immediately after an earthquake is key to rescue and reconstruction efforts [1]

  • The three models compared in this study, i.e., YOLOv3, YOLOv3-ShuffleNet, and YOLOv3-S-generalized intersection over union (GIoU), were trained on the same dataset

  • The loss curves of the three models differed in several ways; e.g., during early training, the YOLOv3-S-GIoU loss value exhibited violent jitter for a shorter period and with a smaller amplitude than the other two models



Introduction

Acquiring information on building damage immediately after an earthquake is key to rescue and reconstruction efforts [1]. For extracting information on building damage from remote sensing images, previous studies have investigated numerous methods, which can be divided into multi-temporal and single-temporal evaluation methods. The multi-temporal evaluation method mainly relies on change detection to evaluate building damage. Gong et al. [6] used high-resolution remote sensing images from before and after the 2010 Yushu earthquake to extract information on building damage using object-oriented, pixel-based, and principal component analysis-based change detection methods. For the single-temporal evaluation method, remote sensing data acquired after an earthquake are subject to fewer constraints, so it has become an effective technical means of directly extracting and evaluating information on building damage [8]. Janalipour et al. [9] used high-spatial-resolution remote sensing images as a basis for manually selecting and extracting features with a fuzzy genetic algorithm, establishing a semi-automatic detection system for building damage.
