Plant diseases significantly impact the quality and yield of agricultural products, leading to considerable economic losses. Most existing plant disease recognition systems are limited to identifying categories present in the training set, which poses potential systemic risks; rejecting unknown samples is therefore crucial for the safety and reliability of practical applications. This study aims to harness the strong generalization capabilities of vision-language models to address plant disease anomaly detection. To this end, we comprehensively explore prompt tuning paradigms based on vision-language models. We observe that anomaly detection methods guided by textual concepts perform poorly on the fine-grained task of plant disease recognition because they focus on concept matching. We argue that visual information is crucial for anomaly detection in plant diseases and therefore propose guiding the vision-language model with visual information to address this issue. Additionally, we find that leveraging the general knowledge extracted by the original vision-language model further enhances anomaly detection performance. Extensive experimental results demonstrate that, by incorporating visual information, our method significantly improves upon current baseline methods. Notably, deploying our method under vision-language prompt tuning achieves an AUROC of 99.85% in the all-shot setting. Even in the challenging 2-shot setting, our approach achieves an AUROC of 93.81%, significantly outperforming CoCoOp fine-tuned on the entire dataset (88.61%). We believe this study will contribute to the community, and to fuel further progress in the field, our code will be released.