Abstract
Discovering and describing topics from User-Generated Content (UGC) data is challenging, yet essential for quickly grasping what interesting topics are happening on the web. Describing a topic by probable keywords and prototype images is an efficient form of human-machine interaction that helps people quickly grasp the topic. However, beyond the challenges of web topic detection itself, mining such multi-media descriptions is a challenging task that conventional approaches can barely handle, due to: (1) noise from non-informative short texts and images in less-constrained UGC; and (2) even for informative images, the gap between visual concepts and social concepts. This paper addresses these challenges from the perspective of background similarity removal and proposes a two-step approach to mining multi-media descriptions from noisy data. First, we use a deconvolution model to strip the similarities among non-informative words/images during web topic detection. Second, the background-removed similarities are reconstructed to identify the probable keywords and prototype images during topic description. By removing background similarities, we generate coherent and informative multi-media descriptions for each topic. Experiments on two public datasets show that the proposed method produces high-quality descriptions.
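The abstract does not specify the deconvolution model's details, so the following is only a minimal illustrative sketch of the underlying idea: treat the dominant low-rank component of a word/image similarity matrix as shared background, subtract it, and rank items by their residual similarity mass. All function names and the rank-1 background assumption are ours, not the paper's.

```python
import numpy as np

def remove_background_similarity(S, k=1):
    """Strip a low-rank 'background' component from a similarity matrix S.

    Assumption (not from the paper): similarity shared by non-informative
    words/images concentrates in the top-k eigencomponents, so subtracting
    that low-rank part leaves mostly topic-specific similarity.
    """
    S = (S + S.T) / 2.0                      # symmetrize before eigendecomposition
    vals, vecs = np.linalg.eigh(S)           # eigenvalues in ascending order
    background = vecs[:, -k:] @ np.diag(vals[-k:]) @ vecs[:, -k:].T
    return np.clip(S - background, 0.0, None)

def rank_by_informative_mass(S_clean, top_n=5):
    """Rank items (keywords or images) by residual similarity mass."""
    scores = S_clean.sum(axis=1)
    return np.argsort(scores)[::-1][:top_n]

# Toy usage: 6 items, where items 0-2 form a coherent topic cluster.
rng = np.random.default_rng(0)
S = rng.uniform(0.2, 0.4, size=(6, 6))       # diffuse background similarity
S[:3, :3] += 0.5                             # topic cluster stands out
S_clean = remove_background_similarity(S)
print(rank_by_informative_mass(S_clean))     # cluster items should rank first
```

In this sketch, background removal plays the role the abstract assigns to the deconvolution step, and the ranking plays the role of reconstructing similarities to pick probable keywords and prototype images.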