Abstract

In recent years, humanitarian organizations have come to rely on social media platforms such as Twitter for situational awareness during disasters. Millions of tweets are posted in the form of text, images, or both. Existing works have shown that image and text provide complementary information during a disaster. Detecting multi-modal informative tweets is helpful to both government and non-government organizations, yet it remains a challenging task. However, most existing works focus on either text or image content, but not both. In this paper, we propose a novel method based on a combination of fine-tuned BERT and DenseNet models for identifying multi-modal informative tweets during a disaster. The fine-tuned BERT model extracts linguistic, syntactic, and semantic features that support a deep understanding of the informative text in a multi-modal tweet, while the fine-tuned DenseNet model extracts high-level features from the image. Experiments are performed on several large datasets, including Hurricane Harvey, Hurricane Irma, Hurricane Maria, California Wildfires, Sri Lanka Floods, and the Iraq–Iran Earthquake. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods on different evaluation measures. To the best of our knowledge, this is the first attempt to detect multi-modal informative tweets, in which at least one of the text or the image is informative during a disaster, using a combination of fine-tuned BERT and DenseNet models.
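
The abstract does not specify how the BERT and DenseNet features are combined. The sketch below is a minimal, hypothetical late-fusion reading of that description: pooled BERT text features and globally pooled DenseNet-121 image features are concatenated and passed to a small classification head that predicts informative vs. not informative. The class `MultiModalInformativeClassifier`, the hidden size of 512, and the dropout rate are illustrative choices, not details taken from the paper.

```python
# Hypothetical late-fusion sketch of a BERT + DenseNet informative-tweet classifier.
# Assumptions: pooled BERT output (768-d) and pooled DenseNet-121 features (1024-d)
# are concatenated and fed to a linear head; the paper's exact fusion is not given.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer
from torchvision import models


class MultiModalInformativeClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Text branch: fine-tunable BERT encoder.
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # Image branch: DenseNet-121 convolutional features.
        # (In practice one would start from ImageNet weights before fine-tuning.)
        self.densenet = models.densenet121(weights=None)
        # Fusion head; sizes here are illustrative, not from the paper.
        self.fusion = nn.Sequential(
            nn.Linear(768 + 1024, 512),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(512, num_classes),
        )

    def forward(self, input_ids, attention_mask, images):
        text_out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        text_feat = text_out.pooler_output                       # (B, 768)
        img_maps = self.densenet.features(images)                 # (B, 1024, H, W)
        img_feat = F.adaptive_avg_pool2d(F.relu(img_maps), 1).flatten(1)  # (B, 1024)
        return self.fusion(torch.cat([text_feat, img_feat], dim=1))


# Usage with a single (tweet text, image) pair; the image tensor is a stand-in
# for a preprocessed 224x224 tweet image.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer("Flooded streets near the river, please avoid the area",
                return_tensors="pt", padding=True, truncation=True)
dummy_image = torch.randn(1, 3, 224, 224)
model = MultiModalInformativeClassifier()
logits = model(enc["input_ids"], enc["attention_mask"], dummy_image)
```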
