Abstract

In recent years, social media platforms have played a critical role in mitigation for a wide range of disasters. The highly up-to-date social responses and vast spatial coverage from millions of citizen sensors enable timely and comprehensive disaster investigation. However, automatically retrieving on-topic social media posts, especially when both their visual and textual information must be considered, remains a challenge. This paper presents an automatic approach to labeling on-topic social media posts using visual-textual fused features. Two convolutional neural networks (CNNs), an Inception-V3 CNN and a word-embedding CNN, are applied to extract visual and textual features, respectively, from social media posts. After being trained on our training sets, the two networks produce visual and textual features that are concatenated into a fused feature feeding the final classification step. The results suggest that both CNNs perform remarkably well in learning visual and textual features, and that adding the visual feature makes the fused classifier more robust than one relying on textual features alone. The on-topic posts, classified automatically by their text and pictures, provide timely documentation during a disaster event. When geotagged, their rich spatial context could greatly aid a variety of disaster mitigation approaches.
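A minimal sketch of the visual-textual fusion pipeline described above, assuming a Keras/TensorFlow implementation. The vocabulary size, sequence length, embedding dimension, layer widths, and the binary on-topic output are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: Inception-V3 visual branch + word-embedding CNN text branch,
# concatenated into a fused feature for on-topic classification.
# Hyperparameters below are assumptions for illustration only.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

VOCAB_SIZE = 20000   # assumed vocabulary size
MAX_LEN = 50         # assumed maximum tokens per post
EMBED_DIM = 100      # assumed word-embedding dimension

# Visual branch: Inception-V3 used as a fixed image feature extractor.
image_in = layers.Input(shape=(299, 299, 3), name="image")
backbone = InceptionV3(include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False
visual_feat = backbone(image_in)                      # (batch, 2048)

# Textual branch: word-embedding CNN over the post's token sequence.
text_in = layers.Input(shape=(MAX_LEN,), name="tokens")
x = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(text_in)
x = layers.Conv1D(128, kernel_size=3, activation="relu")(x)
text_feat = layers.GlobalMaxPooling1D()(x)            # (batch, 128)

# Fused feature: concatenate both representations and classify on-topic vs. off-topic.
fused = layers.Concatenate()([visual_feat, text_feat])
fused = layers.Dense(256, activation="relu")(fused)
on_topic = layers.Dense(1, activation="sigmoid", name="on_topic")(fused)

model = Model(inputs=[image_in, text_in], outputs=on_topic)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```

The key design point the abstract describes is late fusion: each modality is encoded independently, and only the resulting feature vectors are concatenated before the final classifier, so a post with an uninformative picture can still be labeled from its text and vice versa.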
