Accurately identifying rumors is crucial for efficient information assessment. Pretrained Language Models (PLMs) are trained on large text corpora, learn to understand and generate human-like language, and can be fine-tuned for specific NLP tasks. However, fine-tuning typically requires task-specific modifications to the model, creating a gap between the pretraining objective and the downstream task; prompt learning narrows this gap. We introduce “Prompt Learning for Rumor Detection” (PLRD), a T5-based approach that uses T5-generated prompt templates to recast detection as a prompt-driven learning task. Through precise prompt guidance, PLRD exploits the knowledge already encoded in the model, improving rumor detection especially when labeled data are scarce. Experiments on Weibo and Twitter datasets show that PLRD outperforms existing state-of-the-art methods, with the largest gains in low-resource settings.
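To make the prompt-driven formulation concrete, the sketch below shows how a rumor-detection instance can be wrapped in a prompt template and scored by T5 as a text-to-text task, assuming the Hugging Face transformers API. The template wording, the verbalizer (“fact”/“rumor”), and the checkpoint name are illustrative placeholders, not PLRD's actual prompts.

```python
# Minimal sketch of prompt-based rumor detection with T5, assuming the
# Hugging Face `transformers` and `torch` libraries. The template, the
# verbalizer, and the example claims are hypothetical, not from the paper.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

TEMPLATE = "rumor detection claim: {claim} question: is this claim a rumor or a fact?"
VERBALIZER = {0: "fact", 1: "rumor"}  # maps class ids to label words T5 must generate

def encode(claim, label=None):
    """Wrap a claim in the prompt template; optionally attach the label word."""
    enc = tokenizer(TEMPLATE.format(claim=claim), return_tensors="pt",
                    truncation=True, max_length=256)
    if label is not None:
        enc["labels"] = tokenizer(VERBALIZER[label], return_tensors="pt").input_ids
    return enc

# One few-shot gradient step: the model learns to emit the label word.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
batch = encode("Drinking hot water cures the flu.", label=1)
loss = model(**batch).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()

@torch.no_grad()
def predict(claim):
    """Generate the label word for an unseen claim and map it back to a class."""
    out = model.generate(**encode(claim), max_new_tokens=3)
    word = tokenizer.decode(out[0], skip_special_tokens=True).strip()
    return 1 if word == "rumor" else 0

print(predict("NASA confirmed the moon will turn green tonight."))
```

Because the label is produced as ordinary text, the same pretrained generation head scores both classes, which is what lets the prompt formulation work with very few labeled examples.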