Abstract

Quickly obtaining information about crisis events from social media platforms such as Twitter and Weibo is crucial for follow-up rescue work and for post-disaster reconstruction, so extracting useful information through multimodal summary generation technology is very important. Current techniques for generating crisis event summaries suffer mainly from unimodal bias and disregard the diversity of information in text and images. To address these problems, this paper proposes a hierarchical multimodal crisis event summary generation model based on modal alignment and hierarchical design. First, visual and textual context vectors are obtained, and a hierarchical multimodal pointer model then generates the text summary, thereby mitigating modal bias. Second, to select high-quality images, this paper proposes a dynamic selection strategy that accounts for both the required high correlation between text and images and the diversity of crisis information. Finally, experimental results on the crisis event data in the MSMO dataset show that the proposed model achieves good performance in both summary generation and image selection for crisis events.
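The abstract does not detail the dynamic image selection strategy, but balancing text-image relevance against image diversity is commonly done with a greedy maximal-marginal-relevance (MMR) style score. A minimal, illustrative sketch of such a trade-off (all function names and the `lam` weighting parameter are assumptions, not the paper's actual method) could look like:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_images(summary_vec, image_vecs, k=2, lam=0.7):
    """Greedily pick k images: each step maximizes relevance to the
    summary embedding while penalizing similarity (redundancy) to
    images already selected."""
    selected = []
    candidates = list(range(len(image_vecs)))
    while candidates and len(selected) < k:
        best, best_score = None, -float("inf")
        for i in candidates:
            rel = cosine(summary_vec, image_vecs[i])
            red = max((cosine(image_vecs[i], image_vecs[j])
                       for j in selected), default=0.0)
            score = lam * rel - (1 - lam) * red
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam` near 1 the selection favors pure relevance; lowering it pushes the chosen set toward more diverse images, which matches the abstract's stated goal of covering varied crisis information.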
