Abstract

Accurately identifying rumor information is crucial for efficient information assessment. Pretrained Language Models (PLMs) are trained on large text corpora, learning to understand and generate human-like language, and are then fine-tuned for specific NLP tasks. However, fine-tuning typically requires task-specific modifications to the model, creating a gap between pretraining and downstream task execution. Prompt learning reduces this gap. We introduce “Prompt Learning for Rumor Detection” (PLRD), which is based on T5 and uses T5-generated prompt templates to transform the detection task into a prompt-driven learning framework. Through precise prompt guidance, PLRD leverages the model's pretrained knowledge, enhancing rumor detection capability, especially in data-scarce scenarios. Experimental validation on Weibo and Twitter datasets confirms PLRD's superiority over existing methods, particularly when labeled data is limited. Comparative analysis against state-of-the-art methods highlights PLRD's competitiveness and advancement in rumor detection.
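The core idea of reformulating detection as prompt-driven learning can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the template wording and the verbalizer mapping below are assumptions for illustration; only the `<extra_id_0>` sentinel token is standard T5 convention for a masked span.

```python
# Illustrative sketch of prompt-based rumor detection in the style PLRD
# describes: wrap a post in a cloze-style template, let a seq2seq PLM
# (e.g. T5) fill the masked span, then map the answer back to a label.
# The template text and verbalizer are hypothetical, not from the paper.

def build_prompt(post: str) -> str:
    """Wrap a social-media post in a cloze-style prompt; <extra_id_0>
    is T5's sentinel token marking the span the model should generate."""
    return f"Claim: {post} Question: Is this claim a rumor? Answer: <extra_id_0>"

# Verbalizer: maps the PLM's generated answer word back to a task label.
VERBALIZER = {"yes": "rumor", "no": "non-rumor"}

def decode_label(generated_word: str) -> str:
    """Map the model's filled-in word to a rumor/non-rumor label."""
    return VERBALIZER.get(generated_word.strip().lower(), "non-rumor")

prompt = build_prompt("Drinking hot water cures the flu.")
```

In a full pipeline, `prompt` would be fed to a T5 model whose generated answer is passed through `decode_label`; because the task is expressed in the same text-to-text form as pretraining, the gap between pretraining and the detection task narrows, which is why this framing helps in data-scarce settings.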
