Abstract

With the massive growth of social events on the Internet, it has become increasingly difficult to accurately find and organize interesting events from massive social media data, a capability that would help users and governments browse, search, and monitor social events. To address this problem, we propose a novel multi-modal social event tracking and evolution framework that not only effectively captures the multi-modal topics of social events but also reveals their evolutionary trends and generates effective event summaries over time. To achieve this goal, we propose a novel multi-modal event topic model (mmETM), which effectively models social media documents consisting of long text with related images and learns the correlations between the textual and visual modalities to separate visual-representative topics from non-visual-representative topics. To apply mmETM to social event tracking, we adopt an incremental learning strategy, denoted incremental mmETM, which obtains informative textual and visual topics of social events over time to help understand these events and their evolutionary trends. To evaluate the effectiveness of the proposed algorithm, we collect a real-world dataset and conduct various experiments. Both qualitative and quantitative evaluations demonstrate that the proposed mmETM algorithm performs favorably against several state-of-the-art methods.
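The incremental learning idea mentioned above can be illustrated with a simplified, text-only sketch. The snippet below assumes time-sliced batches of tokenized documents (the input name `batches_by_time` is hypothetical) and uses gensim's online LDA update as a stand-in for the incremental topic updates described in the abstract; the actual mmETM additionally models visual features and their correlations with text, which is not reproduced here.

```python
# Minimal sketch of incremental topic tracking over time-sliced social media text.
# NOTE: gensim's online LDA is used as a stand-in for mmETM; the real model also
# incorporates visual features and text-image correlations, which are omitted here.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def track_topics(batches_by_time, num_topics=10):
    """batches_by_time: list of time slices, each a list of tokenized documents
    (hypothetical input format), ordered chronologically."""
    # Build the vocabulary over all slices up front for simplicity; a truly
    # streaming system would need a fixed or pre-trimmed vocabulary instead.
    dictionary = Dictionary(doc for batch in batches_by_time for doc in batch)
    lda = None
    snapshots = []  # per-slice topic snapshots, usable to inspect topic evolution
    for batch in batches_by_time:
        corpus = [dictionary.doc2bow(doc) for doc in batch]
        if lda is None:
            # Initialize the topic model on the first time slice.
            lda = LdaModel(corpus=corpus, id2word=dictionary,
                           num_topics=num_topics, passes=5)
        else:
            # Incrementally update the existing model with the new slice.
            lda.update(corpus)
        snapshots.append(lda.show_topics(num_topics=num_topics, formatted=False))
    return snapshots
```

Comparing consecutive entries of `snapshots` gives a rough picture of how topics drift across time slices, which is the kind of evolutionary trend the framework is designed to surface.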
