Abstract

Fully convolutional networks (FCNs) such as UNet and DeepLabv3+ are highly competitive for detecting earthquake-damaged buildings in very high-resolution (VHR) remote sensing images. However, existing methods show some drawbacks, including incomplete extraction of buildings of different sizes and inaccurate boundary prediction. These shortcomings are attributed to a lack of global context awareness, inaccurate mining of correlations in the spatial context, and a failure to consider the relative positional relationship between pixels and boundaries. Hence, a detection method for earthquake-damaged buildings based on object contextual representations (OCR) and a boundary enhanced loss (BE loss) was proposed. First, the OCR module was separately embedded into the high-level feature extraction stages of DeepLabv3+ and UNet to enhance feature representation; in addition, a novel loss function, BE loss, was designed according to the distance between pixels and boundaries to force the networks to pay more attention to learning boundary pixels. Finally, two improved networks, OB-DeepLabv3+ and OB-UNet, were established from these two strategies. To verify the performance of the proposed method, two benchmark datasets for detecting earthquake-damaged buildings, YSH and HTI, were constructed from post-earthquake images of China and Haiti in 2010, respectively. The experimental results show that both the embedded OCR module and the BE loss significantly increase the detection accuracy for earthquake-damaged buildings, and that the two proposed networks are feasible and effective.
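The exact formulation of the BE loss is not given here, but its stated idea is to weight each pixel's loss by its distance to the nearest building boundary so that boundary pixels dominate training. The sketch below illustrates one plausible realization of that idea with a distance-transform-based weight map applied to binary cross-entropy; the function names, the exponential weighting scheme, and the `alpha`/`sigma` parameters are assumptions for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def boundary_weights(mask, alpha=2.0, sigma=3.0):
    """Per-pixel weights that grow near the building boundary.

    mask: binary {0, 1} ground-truth array; the boundary is where it changes.
    Illustrative sketch only -- not the paper's exact BE loss weighting.
    """
    # Distance of each pixel to the class edge: for foreground pixels,
    # distance to the nearest background pixel, and vice versa.
    inside = distance_transform_edt(mask)        # 0 on background pixels
    outside = distance_transform_edt(1 - mask)   # 0 on foreground pixels
    dist = inside + outside
    # Weight decays away from the boundary, so edge pixels count more.
    return 1.0 + alpha * np.exp(-dist / sigma)


def be_loss(pred, mask, alpha=2.0, sigma=3.0, eps=1e-7):
    """Boundary-weighted binary cross-entropy (hedged sketch).

    pred: predicted foreground probabilities in (0, 1), same shape as mask.
    """
    w = boundary_weights(mask, alpha, sigma)
    bce = -(mask * np.log(pred + eps) + (1 - mask) * np.log(1 - pred + eps))
    return float((w * bce).mean())
```

In this sketch, pixels right at the mask edge receive roughly `1 + alpha` times the weight of pixels far from any boundary, which is one simple way to force a network to focus on boundary learning as the abstract describes.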

Highlights

  • Timely and accurate acquisition of earthquake damage information of buildings from remote sensing images is of great significance for post-earthquake emergency response and post-disaster reconstruction [1,2]

  • Based on the DeepLabv3+ and UNet networks, we propose a method for detecting earthquake-damaged buildings in very high-resolution (VHR) remote sensing images using object context and a boundary enhanced loss (BE loss)

  • We develop improved DeepLabv3+ and UNet networks, each embedded with the object contextual representations (OCR) module, which significantly enhances their feature representation ability


Introduction

Timely and accurate acquisition of earthquake damage information of buildings from remote sensing images is of great significance for post-earthquake emergency response and post-disaster reconstruction [1,2]. Automatic detection of earthquake-damaged buildings in very high-resolution (VHR) remote sensing images has become a research hotspot in computer vision. Compared with traditional machine learning methods, deep learning approaches can automatically extract highly discriminative and representative abstract features that are crucial for detecting earthquake-damaged buildings. Among these approaches, the classical convolutional neural network (CNN)


