Abstract

The collapse of buildings is a major factor in the casualties and economic losses of earthquake disasters, and the degree of building collapse is an important indicator for disaster assessment. To improve the classification of collapsed building coverings (CBC), a new fusion technique was proposed to integrate optical and SAR data at the pixel level based on manifold learning. Three typical manifold learning models, namely Isometric Mapping (Isomap), Locally Linear Embedding (LLE), and principal component analysis (PCA), were used, and their results were compared. Features were extracted from SPOT-5 and RADARSAT-2 data. Experimental results showed that 1) the most useful features of the optical and SAR data were contained in manifolds with low intrinsic dimensionality, while the various CBC classes were distributed differently throughout the low-dimensional spaces of manifolds derived from different manifold learning models; 2) in some cases the performance of Isomap was similar to that of PCA, but PCA generally performed best in this study, yielding the highest accuracy for all CBC classes and requiring the least time for feature extraction and learning; and 3) the LLE-derived manifolds yielded the lowest accuracy, mainly by confusing soil with collapsed buildings and rock. These results show that manifold learning can improve the effectiveness of CBC classification by fusing optical and SAR data features at the pixel level, which can be applied in practice to support the accurate analysis of earthquake damage.

Highlights

  • Large-scale earthquakes severely damage people’s lives and property

  • ENVISAT ASAR data acquired before and after the 2008 Wenchuan earthquake over central Dujiangyan, classified together with a post-earthquake IKONOS optical image as a band combination, extracted damaged buildings with an accuracy of 81.3% (Xue Tengfei et al., 2012)

  • This study aims to propose a new methodological framework at the pixel level to fuse the features of optical and SAR data to improve collapsed building coverings (CBC) classification based on manifold learning

Introduction

Large-scale earthquakes severely damage people’s lives and property. Fast, accurate, and effective earthquake disaster monitoring and evaluation using airborne and spaceborne remote sensing provides an important scientific basis and decision-making support for government emergency response and post-disaster reconstruction. Matsuoka et al. (2004) analyzed intensity-change and coherence information from ERS SAR data acquired before and after the 1995 Kobe earthquake and found that combining the two yields higher extraction accuracy for earthquake-damaged buildings than either type of information alone. Gamba et al. (2007) jointly detected intensity and coherence changes in pre- and post-event SAR images and incorporated GIS information of the study area into the change detection, achieving markedly higher extraction accuracy than single-information approaches. Gong Lixia et al. (2016) combined the intensity-change information of ALOS PALSAR data acquired before and after the 2008 Wenchuan earthquake with the coherence coefficient to detect and analyze building damage in urban Dujiangyan, and the extraction results agreed closely with field survey results. Chini et al. (2009) integrated high-resolution optical and SAR imagery to measure seismic building damage: building masks were vectorized from the optical image and superimposed on the SAR intensity-difference map, improving the accuracy of building damage extraction. Taking the Port-au-Prince area in the 2010 Haiti earthquake as an example, buildings were modeled from the pre-earthquake IKONOS optical image and used for SAR simulation imaging to produce a pre-event image of intact buildings, which was then compared with post-earthquake spaceborne SAR images from COSMO-SkyMed and RADARSAT-2 to detect changes in the area.
ENVISAT ASAR data acquired before and after the 2008 Wenchuan earthquake over central Dujiangyan were combined with a post-earthquake IKONOS optical image: the intensity-correlation change-detection results of the pre- and post-event SAR images and the post-earthquake IKONOS image were classified together as a band combination, and damaged buildings were extracted with an accuracy of 81.3% (Xue Tengfei et al., 2012). In the fusion of optical and SAR data, the various features extracted from the two sources form a high-dimensional data set; learning its low-dimensional intrinsic structure can be regarded as the fusion process of the two data sources.
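The fusion idea above can be sketched with scikit-learn: per-pixel optical and SAR features are stacked into one high-dimensional vector, and each of the three manifold learning models (PCA, Isomap, LLE) embeds the stacked features into a low-dimensional space. This is a minimal illustration, not the paper's implementation; the random arrays and feature dimensions below are placeholders standing in for the actual SPOT-5 and RADARSAT-2 feature sets.

```python
# Minimal sketch of pixel-level optical/SAR feature fusion via manifold learning.
# Random features stand in for real SPOT-5 (optical) and RADARSAT-2 (SAR)
# per-pixel features; dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding

rng = np.random.default_rng(0)
n_pixels = 500
optical = rng.normal(size=(n_pixels, 4))   # placeholder optical band features
sar = rng.normal(size=(n_pixels, 3))       # placeholder SAR intensity/texture features

# Pixel-level fusion: stack both feature sets into one vector per pixel.
fused = np.hstack([optical, sar])          # shape (n_pixels, 7)

# Embed the fused high-dimensional features into a low-dimensional manifold
# with each of the three models compared in the paper.
n_components = 2
embeddings = {
    "PCA": PCA(n_components=n_components).fit_transform(fused),
    "Isomap": Isomap(n_components=n_components, n_neighbors=10).fit_transform(fused),
    "LLE": LocallyLinearEmbedding(n_components=n_components,
                                  n_neighbors=10).fit_transform(fused),
}
for name, emb in embeddings.items():
    # Each pixel now has a low-dimensional fused representation that a
    # downstream classifier could use for CBC classification.
    print(name, emb.shape)
```

The low-dimensional embeddings would then serve as input to a per-pixel classifier; the choice of `n_components` corresponds to the intrinsic dimensionality discussed in the abstract.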
