Abstract

Amid the proliferation of misinformation on social networks, automated rumor detection has become a pressing research problem. However, existing methods are limited by constrained feature representations and poor adaptability when confronting diverse and unconventional rumors. Large language models promise stronger semantic understanding and broader adaptability, yet prevailing general-purpose prompting approaches often fail to provide sufficient domain-specific context and guidance, limiting their usefulness for rumor detection. To address these issues, we propose a Knowledge-Powered Prompting strategy, which supplies the model with task-relevant prompts and context by combining domain expertise with large language models. This fusion aligns the model more closely with the demands of rumor detection, mitigating its sensitivity to semantic subtleties and the scarcity of training samples. Specifically, we design exploration prompts and strengthen the prompt representation with a dynamic knowledge injection module, enabling deeper reasoning about key entities. We then extract useful external knowledge by filtering the interactions between knowledge and claim, reducing the impact of noise. Finally, we jointly optimize multi-task prompt-filling and classification objectives, fostering synergistic semantic modeling and discriminative judgment. Experiments show that our method substantially outperforms existing models.
