Abstract

How Deep Neural Networks (DNNs) can best understand multimedia content remains an open problem, mainly due to two factors. First, conventional DNNs cannot effectively learn representations of images with sparse visual information, such as the images that illustrate knowledge concepts in textbooks. Second, existing DNNs cannot effectively capture the fine-grained interactions between images and their text descriptions. To address these issues, we propose a deep Cross-Media Grouping Fusion Network (CMGFN), which has two distinctive properties: 1) CMGFN can effectively learn visual features from images with sparse visual information, by first progressively shifting the attention of convolution filters toward valuable visual regions and then enhancing the use of key visual information in feature construction. 2) Through a cross-media grouping co-attention mechanism, CMGFN exploits the interactions between visual features of different semantics and textual descriptions to learn cross-media features that represent different fine-grained semantics in different groups. Empirical studies demonstrate that CMGFN not only achieves state-of-the-art performance on multimedia documents containing sparse visual information, but also generalizes well to other multimedia data, e.g., multimedia fake news.
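To make the grouping co-attention idea concrete, below is a minimal PyTorch sketch of how a grouped cross-media co-attention layer of this kind could look: visual and textual features are split into groups along the channel dimension, and each group attends to the text independently so that different groups can capture different fine-grained semantics. The class name GroupedCoAttention, the projection layout, and the group count are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class GroupedCoAttention(nn.Module):
    """Hypothetical grouped co-attention: visual regions query text tokens,
    with attention computed separately in each semantic group."""
    def __init__(self, dim: int, groups: int):
        super().__init__()
        assert dim % groups == 0, "feature dim must be divisible by groups"
        self.groups = groups
        self.head_dim = dim // groups
        # Separate linear projections for the two modalities.
        self.q_vis = nn.Linear(dim, dim)
        self.k_txt = nn.Linear(dim, dim)
        self.v_txt = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, vis: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # vis: (B, R, D) region features; txt: (B, L, D) token features.
        B, R, D = vis.shape
        L = txt.shape[1]
        G, Hd = self.groups, self.head_dim
        # Split each modality's features into G groups along the channel axis.
        q = self.q_vis(vis).view(B, R, G, Hd).transpose(1, 2)  # (B, G, R, Hd)
        k = self.k_txt(txt).view(B, L, G, Hd).transpose(1, 2)  # (B, G, L, Hd)
        v = self.v_txt(txt).view(B, L, G, Hd).transpose(1, 2)  # (B, G, L, Hd)
        # Per-group co-attention: each visual group attends to the text
        # independently, yielding group-wise fine-grained cross-media features.
        attn = torch.softmax(q @ k.transpose(-2, -1) / Hd ** 0.5, dim=-1)
        fused = (attn @ v).transpose(1, 2).reshape(B, R, D)    # (B, R, D)
        return self.out(fused)

# Usage: fuse 36 region features with 20 token features in 4 groups.
layer = GroupedCoAttention(dim=256, groups=4)
out = layer(torch.randn(2, 36, 256), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 36, 256])

The channel split mirrors the paper's stated goal of letting different groups represent different fine-grained semantics; how CMGFN actually partitions and fuses the groups is not specified in the abstract.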
