Abstract
The increase in both the frequency and magnitude of natural disasters, coupled with recent advances in artificial intelligence, has opened opportunities to investigate how new technologies can facilitate disaster response. Preliminary Damage Assessment (PDA), a labor-intensive procedure that requires the manual inspection of residential structures to determine post-disaster damage severity, stands to benefit significantly from computer vision-based classification algorithms, which promise efficiency gains and improved accuracy. This paper proposes a Vision Transformer (ViT)-based model for classifying damage severity, achieving an accuracy of 95%. Notably, our model is trained on a repository of over 18,000 ground-level images of homes whose damage severity was annotated by damage assessment professionals during the 2020–2022 California wildfires, representing a novel application of ViT technology in this domain. Furthermore, we have open-sourced this dataset, the first of its kind and scale, for use by the research community. We have also developed a publicly accessible web application prototype built on this classification algorithm, which we demonstrated to disaster management practitioners to gather their feedback. Our contribution to the literature thus encompasses a novel imagery dataset, an applied framework informed by field professionals, and a high-accuracy damage severity classification model.
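For illustration only, the following is a minimal sketch of how a pre-trained Vision Transformer can be fine-tuned for multi-class damage-severity classification on ground-level imagery. The class count, directory layout, backbone choice (a torchvision ViT-B/16), and hyperparameters shown here are assumptions made for the example and do not reflect the paper's actual training configuration.

```python
# Illustrative sketch: fine-tune a pre-trained ViT for damage-severity classification.
# Assumptions (not from the paper): 4 severity classes, an ImageFolder-style directory
# layout under data/train/<class_name>/, and a torchvision ViT-B/16 backbone.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

NUM_CLASSES = 4  # hypothetical severity levels, e.g. none / minor / major / destroyed

# Use the preprocessing pipeline that matches the pre-trained weights.
weights = ViT_B_16_Weights.IMAGENET1K_V1
preprocess = weights.transforms()

# Assumed layout: data/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Load a ViT pre-trained on ImageNet and replace its classification head
# with a new linear layer sized for the damage-severity classes.
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, NUM_CLASSES)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
criterion = nn.CrossEntropyLoss()

# Standard supervised fine-tuning loop (epoch count is illustrative).
model.train()
for epoch in range(5):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In this kind of setup, only the classification head is new; the pre-trained transformer encoder is fine-tuned end to end, which is a common choice when the labeled dataset (here, tens of thousands of images) is large enough to adapt the backbone without overfitting.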