Abstract

One of the most important disaster management requirements is accurate damage map generation to support rescue and reconstruction efforts. Remote sensing images play a significant role in this application because of the rich detail provided by their high spatial, spectral, and temporal resolutions; accordingly, the literature contains many studies that combine pre- and post-event images with geospatial machine learning techniques for automatic damage mapping. However, acquiring suitable pre-event data can be challenging due to the unpredictable nature of hazards. In this paper, we customize a pre-trained residual neural network with 34 layers (ResNet-34) to identify damaged buildings using only post-event high-resolution remote sensing images. To evaluate the efficiency of the damage detection framework, airborne orthophotos of the 2010 Haiti earthquake and the 2018 Woolsey fire are utilized. The network identified damaged and non-damaged buildings with over 91% overall accuracy.
