Accurate and timely information is essential for coordinating an effective disaster response. Traditional methods have struggled to categorize disaster events and assess damage severity efficiently, owing to the variety and complexity of data sources. Previous research has focused on specific tasks, such as information gathering or humanitarian assistance, but has not adequately addressed the assessment of disaster damage severity. This paper proposes a hybrid learning model to improve disaster event classification and damage severity identification. The model combines image and text data in a cooperative way, using ResNet50 to extract features from images and an LSTM with an attention mechanism to learn sequential representations from text. This combination yields a more contextual and informative representation of the input data. Compared to existing approaches, the proposed multimodal approach achieves significantly better results in disaster event classification. In addition, the proposed model shows promising results in identifying disaster damage severity. These advancements are especially important for real-world applications such as disaster management and response coordination, where accuracy and reliability are essential. The comprehensive methodology and empirical results presented in this paper demonstrate the effectiveness and potential of hybrid learning models that leverage multimodal data for sophisticated analytical tasks in disaster scenarios.
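To make the fusion design concrete, the following is a minimal PyTorch sketch of the kind of architecture the abstract describes: a ResNet50 image branch and an attention-pooled LSTM text branch whose features are concatenated and passed to a classification head. The class name, hyperparameters, and concatenation-based fusion are illustrative assumptions; the paper's exact configuration is not specified here.

```python
# Illustrative sketch only: hyperparameters and fusion layout are assumptions,
# not the paper's reported configuration.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class HybridDisasterClassifier(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, num_classes, embed_dim=128, hidden_dim=256):
        super().__init__()
        # Image branch: ResNet50 backbone with the final FC layer removed,
        # yielding 2048-d pooled features per image.
        backbone = resnet50(weights=None)  # pretrained weights could be loaded instead
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])
        # Text branch: token embedding followed by an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Additive attention scores over the LSTM outputs.
        self.attn = nn.Linear(hidden_dim, 1)
        # Fusion of the two modalities and classification head.
        self.classifier = nn.Sequential(
            nn.Linear(2048 + hidden_dim, 256),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, images, token_ids):
        # images: (B, 3, 224, 224); token_ids: (B, T)
        img_feat = self.cnn(images).flatten(1)               # (B, 2048)
        outputs, _ = self.lstm(self.embed(token_ids))        # (B, T, H)
        weights = torch.softmax(self.attn(outputs), dim=1)   # (B, T, 1)
        txt_feat = (weights * outputs).sum(dim=1)            # attention-pooled (B, H)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))

# Example forward pass with dummy inputs.
model = HybridDisasterClassifier(vocab_size=10000, num_classes=7)
logits = model(torch.randn(2, 3, 224, 224), torch.randint(1, 10000, (2, 20)))
print(logits.shape)  # torch.Size([2, 7])
```

The attention pooling gives the text branch a weighted summary of the sequence rather than relying on the final hidden state alone, which is one common way to realize the "LSTM with attention" component the abstract mentions.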