Abstract

With recent tremendous improvements in the spatial, spectral, and temporal resolutions of remote sensing imaging systems, their applications have increased dramatically. Among the applications of very high-resolution remote sensing images, damage detection for rapid emergency response is one of the most challenging. Recently, deep learning frameworks have improved earthquake damage detection through the automatic extraction of strong deep features. However, most existing studies in this area rely on nadir satellite images or orthophotos, which limits the available data sources. The objective of this study is to present a multi-modal integrated framework that combines orthophoto and off-nadir images for earthquake building damage detection. To this end, a multi-feature fusion method based on deep transfer learning is presented, comprising four steps: pre-processing, deep feature extraction, deep feature fusion, and transfer learning. To validate the presented framework, two comparative experiments are conducted on the 2010 Haiti earthquake, using pre- and post-event off-nadir satellite images collected by the WorldView-2 (WV-2) satellite platform as well as a post-event airborne orthophoto. The results demonstrate considerable advantages in identifying damaged and non-damaged buildings, with an overall accuracy above 83%.
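To make the pipeline concrete, the sketch below illustrates one plausible realization of the deep feature extraction, fusion, and transfer-learning steps in PyTorch. It is an illustration only, not the paper's implementation: the ResNet-50 backbones, concatenation-based fusion, 224x224 patch size, and classifier head are all assumptions made for the example.

import torch
import torch.nn as nn
from torchvision import models

class FusionDamageClassifier(nn.Module):
    """Two-branch network: one branch per image modality; fused deep
    features feed a small head that labels a building patch pair as
    damaged or non-damaged. Hypothetical sketch, not the paper's model."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # ImageNet-pretrained backbones stand in for the transfer-learning
        # step (assumed architecture; the paper does not specify ResNet-50).
        self.ortho_branch = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.offnadir_branch = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        feat_dim = self.ortho_branch.fc.in_features  # 2048 for ResNet-50
        # Drop the ImageNet heads; keep the backbones as feature extractors.
        self.ortho_branch.fc = nn.Identity()
        self.offnadir_branch.fc = nn.Identity()
        # Feature-level fusion by concatenation, then a classifier head.
        self.classifier = nn.Sequential(
            nn.Linear(2 * feat_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, ortho: torch.Tensor, offnadir: torch.Tensor) -> torch.Tensor:
        f1 = self.ortho_branch(ortho)        # deep features, orthophoto patch
        f2 = self.offnadir_branch(offnadir)  # deep features, off-nadir patch
        fused = torch.cat([f1, f2], dim=1)   # deep feature fusion
        return self.classifier(fused)

# Usage: classify one pre-processed patch pair per building footprint.
model = FusionDamageClassifier()
ortho_patch = torch.randn(1, 3, 224, 224)
offnadir_patch = torch.randn(1, 3, 224, 224)
logits = model(ortho_patch, offnadir_patch)  # shape: (1, 2)

Concatenation is only one of several plausible fusion strategies; element-wise addition or attention-weighted fusion would slot into the same two-branch structure.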
