Abstract

Social media has become an integral part of daily life. During time-critical events, the public shares a variety of posts on social media, including reports of resource needs, damage, and offers of help for the affected community. Such posts can be relevant and may contain valuable situational-awareness information. However, the sheer volume of social media content makes it difficult for emergency services to process and extract relevant information in a timely manner. The growing use of multimedia content in social media posts in recent years adds further to this challenge. In this paper, we present a novel method for multimodal relevance classification of social media posts, where relevance is defined with respect to the information needs of emergency management agencies. Specifically, we combine semantic textual features with image features to efficiently classify relevant multimodal social media posts. We validate our method by classifying data from three real-world crisis events. Our experiments demonstrate that features based on the proposed hybrid framework, which exploits both textual and image content, improve the performance of identifying relevant posts. In light of these experiments, applying the proposed classification method could reduce the cognitive load on emergency services when filtering multimodal public posts at large scale.
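The hybrid text-plus-image approach described above can be sketched as a simple feature-level (early) fusion classifier. The keyword features, scene labels, weights, and threshold below are illustrative placeholders only; the abstract does not specify the paper's actual feature extractors or model:

```python
# Minimal sketch of early (feature-level) fusion for multimodal relevance
# classification. All extractors and weights are hypothetical stand-ins,
# not the paper's method.
import math

def text_features(text):
    # Hypothetical textual cues: presence of need/damage/offer keywords.
    keywords = ["need", "damage", "help", "rescue", "donate"]
    tokens = text.lower().split()
    return [1.0 if k in tokens else 0.0 for k in keywords]

def image_features(image_tag):
    # Stand-in for learned image features: a one-hot over coarse scene labels.
    labels = ["flood", "fire", "rubble", "other"]
    return [1.0 if image_tag == label else 0.0 for label in labels]

def fuse(text_vec, img_vec):
    # Early fusion: concatenate modality features into a single vector.
    return text_vec + img_vec

def relevance_score(features, weights, bias=0.0):
    # Logistic score over the fused feature vector.
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights favouring crisis-related cues in either modality;
# the final negative weight penalises non-crisis scenes.
WEIGHTS = [1.2, 1.0, 0.8, 1.1, 0.6, 1.5, 1.5, 1.3, -1.0]

def classify(text, image_tag, threshold=0.5):
    feats = fuse(text_features(text), image_features(image_tag))
    return relevance_score(feats, WEIGHTS) >= threshold
```

A post with crisis cues in either modality scores high, e.g. `classify("urgent need help after flood", "flood")` returns `True`, while a non-crisis post such as `classify("nice sunset photo", "other")` returns `False`. In practice the weights would be learned from labelled crisis data rather than hand-set.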
