Abstract

Existing models for multi-modal fake news detection focus mainly on capturing common semantics shared across modalities to improve detection performance; however, they ignore the inconsistent features between these modalities. Intuitively, people identify a piece of fake news by checking for inconsistent semantics within the news content itself and between the news and its comments, a process that can be abstracted as "comparing news image-text consistency - finding valuable comments - reasoning about the in-/consistency between news and comments". Inspired by this cognitive process, we propose Human Cognition-based Consistency Inference Networks (HCCIN) to comprehensively explore consistent and inconsistent semantics for multi-modal fake news detection. Specifically, we first design a cross-modal alignment layer to learn consistent semantics between the textual and visual information within multi-modal news; a comment clue discovery layer then identifies the semantics that audiences are most concerned with across comments. Finally, we develop a collaborative inference layer that drives the news's consistent semantics and the most-concerned comment semantics to jointly reason about, and discover, the consistent and inconsistent information between them. Experiments on three public datasets (Weibo, Twitter, and PHEME) demonstrate the superiority of our HCCIN.
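The three-stage pipeline described above can be sketched roughly as follows. This is a minimal, hypothetical NumPy illustration of the idea, not the paper's actual implementation: cross-modal alignment is approximated with scaled dot-product attention from text tokens to image regions, comment clue discovery with attention pooling over comment embeddings, and collaborative inference with element-wise product (consistency) and absolute difference (inconsistency) features. All function names, dimensions, and operators here are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_align(text, image):
    # text: (Lt, d) token features; image: (Li, d) region features.
    # Attend each text token over image regions to get image-grounded
    # text features, i.e. the semantics consistent across modalities.
    attn = softmax(text @ image.T / np.sqrt(text.shape[1]))
    return attn @ image  # (Lt, d)

def comment_clues(comments):
    # comments: (Lc, d) comment embeddings.
    # Attention pooling: comments that many other comments attend to
    # stand in for the "most-concerned" audience semantics.
    scores = softmax((comments @ comments.T / np.sqrt(comments.shape[1])).mean(axis=0))
    return scores @ comments  # (d,)

def collaborative_infer(news_vec, clue_vec):
    # Consistency cue: element-wise product; inconsistency cue:
    # absolute difference. Concatenate both for the classifier.
    return np.concatenate([news_vec * clue_vec, np.abs(news_vec - clue_vec)])

rng = np.random.default_rng(0)
d = 8
text = rng.normal(size=(5, d))      # 5 text tokens
image = rng.normal(size=(4, d))     # 4 image regions
comments = rng.normal(size=(6, d))  # 6 comments

news_vec = cross_modal_align(text, image).mean(axis=0)  # pooled news semantics
feat = collaborative_infer(news_vec, comment_clues(comments))
print(feat.shape)  # (16,)
```

A real system would learn these projections end-to-end and feed `feat` into a classification head; the sketch only shows how consistent and inconsistent signals can coexist in one feature vector.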
