Abstract

Efficient building damage assessment after disasters is vital for emergency response and loss evaluation, but the task is complicated by diverse building structures and complex environments. Traditional methods using Convolutional Neural Networks (CNNs) struggle to capture global contextual features, limiting damage categorization accuracy. To address this, we introduce the High-Resolution Transformer Architecture for Building Damage Assessment (HRTBDA), which enhances multi-scale feature extraction. A Cross-Attention-Based Spatial Fusion (CSF) module is proposed to utilize the attention mechanism, improving the model’s ability to identify detailed associations in damaged buildings. Additionally, we propose a deep convolution network matching optimization strategy that integrates a multilayer perceptron and expands the receptive field, enhancing global feature perception. HRTBDA’s performance was evaluated on two public datasets and compared with five recent frameworks. The model achieved an F1-score of 86.0% in building localization and 78.4% in damage assessment, with a 4.8% improvement in detecting minor damages. These results demonstrate HRTBDA’s potential for improving building damage assessment and highlight its significant advancements over existing methods.
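The abstract does not specify the internals of the CSF module, but the general pattern of cross-attention fusion between two spatial feature maps (e.g. pre- and post-disaster tokens) can be illustrated with a minimal NumPy sketch. All names, shapes, and the residual-fusion choice below are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fusion(feat_a, feat_b, d_k=32, seed=0):
    """Hypothetical cross-attention fusion sketch.

    Queries are drawn from feat_a and attend over keys/values from
    feat_b, so features from one branch (e.g. pre-disaster) are
    enriched with context from the other (e.g. post-disaster).

    feat_a, feat_b: (N, C) arrays of N flattened spatial tokens
    with C channels each. Projection weights are random here; in a
    trained model they would be learned parameters.
    """
    rng = np.random.default_rng(seed)
    n, c = feat_a.shape
    w_q = rng.standard_normal((c, d_k)) / np.sqrt(c)
    w_k = rng.standard_normal((c, d_k)) / np.sqrt(c)
    w_v = rng.standard_normal((c, c)) / np.sqrt(c)

    q = feat_a @ w_q                                  # (N, d_k)
    k = feat_b @ w_k                                  # (N, d_k)
    v = feat_b @ w_v                                  # (N, C)
    attn = softmax(q @ k.T / np.sqrt(d_k), axis=-1)   # (N, N), rows sum to 1
    return feat_a + attn @ v                          # residual fusion, (N, C)

# Usage: fuse 16 tokens of 64-channel features from two branches.
pre = np.random.default_rng(1).standard_normal((16, 64))
post = np.random.default_rng(2).standard_normal((16, 64))
fused = cross_attention_fusion(pre, post)
```

The residual connection keeps the original branch features while adding attended context, a common design in attention-based fusion blocks.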
