Abstract

Quick and accurate extraction of un-collapsed buildings from post-disaster High-resolution Remote Sensing Images (HRSIs) is imperative for emergency response. Pre-disaster HRSIs can serve as auxiliary data for training models to expedite this extraction. However, models trained directly on pre-disaster HRSIs tend to lose effectiveness when applied to post-disaster scenarios, mainly because of the notable discrepancies between the two sets of images. The currently popular remedy aligns features from pre- and post-disaster images within an unsupervised domain adversarial learning framework, but conventional methods often fall short in reducing the substantial disparity between the two domains because they do not comprehensively align category-level and multi-scale features. To overcome these limitations, we propose the Multi-scale Global and Category-attention Features Alignment Network (MGCAN). MGCAN refines the feature alignment strategy by concurrently aligning both multi-scale global features and category-attention features, effectively narrowing the gap between pre- and post-disaster HRSIs. Extensive experiments demonstrate that MGCAN significantly improves the accuracy of un-collapsed building extraction from post-disaster HRSIs and outperforms other state-of-the-art domain adversarial networks across different disaster scenarios.
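
To make the alignment idea concrete, the sketch below shows one common way to implement multi-scale domain adversarial feature alignment with a gradient reversal layer, plus a simple category-attention weighting that emphasizes likely building regions before alignment. This is a minimal illustrative sketch, not the authors' MGCAN implementation: the toy encoder, the per-scale patch discriminators, the sigmoid-of-building-logit attention form, and the 0.1 adversarial loss weight are all assumptions introduced here for illustration.

```python
# Minimal sketch of multi-scale domain adversarial alignment with a
# category-attention weighting (all shapes, modules, and weights assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates the gradient on the way back,
    so the encoder is trained to fool the domain discriminators."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, g):
        return -ctx.lambd * g, None

def grl(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class ToyEncoder(nn.Module):
    """Stand-in backbone emitting features at two scales."""
    def __init__(self):
        super().__init__()
        self.s1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.s2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())

    def forward(self, x):
        f1 = self.s1(x)
        f2 = self.s2(f1)
        return [f1, f2]

class Discriminator(nn.Module):
    """Per-scale patch discriminator: pre-disaster (0) vs. post-disaster (1)."""
    def __init__(self, c):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c, 64, 3, 1, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, 1, 1))

    def forward(self, f):
        return self.net(f)

encoder = ToyEncoder()
seg_head = nn.Conv2d(64, 2, 1)                 # background / building logits
discs = nn.ModuleList([Discriminator(32), Discriminator(64)])
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(seg_head.parameters())
    + list(discs.parameters()), lr=1e-4)

pre = torch.randn(2, 3, 64, 64)                # labeled pre-disaster batch
post = torch.randn(2, 3, 64, 64)               # unlabeled post-disaster batch
labels = torch.randint(0, 2, (2, 16, 16))      # toy segmentation labels

f_pre, f_post = encoder(pre), encoder(post)
loss = F.cross_entropy(seg_head(f_pre[-1]), labels)  # supervised on pre only

# Category-attention maps (assumed form): current "building" probability,
# detached so the attention itself is not pushed around by the adversary.
att_pre = torch.sigmoid(seg_head(f_pre[-1]).detach()[:, 1:])
att_post = torch.sigmoid(seg_head(f_post[-1]).detach()[:, 1:])

for fp, fq, d in zip(f_pre, f_post, discs):    # align at every scale
    ap = F.interpolate(att_pre, size=fp.shape[2:])
    aq = F.interpolate(att_post, size=fq.shape[2:])
    dp, dq = d(grl(fp * ap)), d(grl(fq * aq))
    # Discriminator learns pre=0 / post=1; the GRL makes the encoder
    # remove whatever domain cue the discriminator exploits.
    loss = loss + 0.1 * (
        F.binary_cross_entropy_with_logits(dp, torch.zeros_like(dp))
        + F.binary_cross_entropy_with_logits(dq, torch.ones_like(dq)))

opt.zero_grad()
loss.backward()
opt.step()
```

Under this setup the segmentation loss uses only pre-disaster labels, while the adversarial terms pull the pre- and post-disaster feature distributions together at each scale, with the attention maps concentrating the alignment on building-relevant locations.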
