Abstract

During the disaster response and recovery stages, stakeholders including governmental agencies collect information on a disaster's impact to inform disaster relief, resource allocation, and infrastructure reconstruction. Damage data collected through field surveys and satellite imagery are often not available immediately after a disaster, yet rapid information is crucial for time-sensitive decision making. Some researchers have therefore turned to social media for real-time situational information about disaster damage. However, existing damage assessment research has mostly focused on a single data modality (i.e., text or image) and produced coarse-grained predictions, which limits its practical usefulness for city-level operations. Many studies have also outlined the difficulty of retrieving useful information from vast, noisy social media data. We therefore propose a data-driven method to locate and assess disaster damage from massive multimodal social media data. The method splits the data into two modalities, text and images, and processes each with a dedicated module. The image analysis module uses five machine learning classifiers organized in a hierarchical structure; the text analysis module uses a keyword search-based method. Together they mine various kinds of damage information, including hazard types (e.g., wind and flood), hazard severities, and damage types (e.g., infrastructure destruction and housing damage). The method is applied to and evaluated on two recent hurricane events. In practice, it acquires damage information throughout extreme events and supplements conventional damage assessment methods, enabling rapid access to damage information and faster disaster response for both first responders and the general public. The research effort contributes to more transparent and effective disaster relief activities.
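
As a rough illustration of how the two modules described above could be wired together, the following Python sketch pairs a keyword search over post text with a hierarchical cascade of image classifiers. The keyword lists, stage names, class labels, and early-stopping rule are illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch of the two-module pipeline: a keyword search-based text
# module and a hierarchical cascade of image classifiers. All keyword lists,
# stage names, and labels below are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable

# --- Text analysis module: keyword search over post text -------------------
DAMAGE_KEYWORDS = {
    "wind": ["wind damage", "roof torn", "downed trees", "power lines down"],
    "flood": ["flooded", "storm surge", "water rising", "underwater"],
}

def classify_text(post_text: str) -> list[str]:
    """Return the hazard types whose keywords appear in the post text."""
    text = post_text.lower()
    return [
        hazard
        for hazard, keywords in DAMAGE_KEYWORDS.items()
        if any(kw in text for kw in keywords)
    ]

# --- Image analysis module: hierarchical cascade of classifiers ------------
@dataclass
class Stage:
    name: str
    predict: Callable[[bytes], str]  # each stage wraps a trained ML classifier

def classify_image(image: bytes, stages: list[Stage]) -> dict[str, str]:
    """Run an image through a cascade of classifiers, where later stages
    refine earlier labels (e.g., relevance -> hazard type -> severity)."""
    labels: dict[str, str] = {}
    for stage in stages:
        label = stage.predict(image)
        labels[stage.name] = label
        if label == "irrelevant":  # stop early if the image is off-topic
            break
    return labels
```

In this sketch the text module acts as a cheap filter and coarse labeler, while the image cascade assigns progressively finer-grained labels and drops irrelevant images early, mirroring the hierarchical organization mentioned in the abstract.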
