Abstract
Owing to the unanticipated, and thereby treacherous, nature of disasters, it is essential to gather relevant information and data urgently; this provides a detailed overview of the situation and helps humanitarian organizations prioritize their tasks. In this paper, "An Efficient Multi-Modal Classification Approach for Disaster-related Tweets," we propose a Deep Learning-based framework to classify disaster-related tweets by analyzing both their text and image content. The approach uses a Gated Recurrent Unit (GRU) with GloVe embeddings for text classification and a VGG-16 network for image classification. Finally, a combined model is built from the text and image modules using the late fusion technique. The results show that the proposed multi-modal system performs well in classifying disaster-related content.
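The late-fusion step can be illustrated with a minimal sketch: each modality's classifier produces class probabilities independently, and the fused prediction combines them at the decision level. Averaging (shown here) is one common late-fusion rule; the exact combination rule, class labels, and probability values below are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def late_fusion(text_probs, image_probs, w_text=0.5):
    # Weighted average of per-modality class probabilities.
    # Averaging is one common late-fusion rule; the weight is an assumption.
    fused = w_text * text_probs + (1.0 - w_text) * image_probs
    # Renormalize so the fused scores form a valid probability distribution.
    return fused / fused.sum(axis=-1, keepdims=True)

# Hypothetical softmax outputs for classes [informative, not_informative]
text_probs = np.array([0.8, 0.2])   # e.g. from the GRU + GloVe text branch
image_probs = np.array([0.6, 0.4])  # e.g. from the VGG-16 image branch

fused = late_fusion(text_probs, image_probs)
label = ["informative", "not_informative"][int(np.argmax(fused))]
```

Because fusion happens only on the final probabilities, each branch can be trained and tuned independently before being combined.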