The stable operation of the power system is closely tied to the national economy and people's livelihoods. Therefore, the timely detection, qualitative assessment, and handling of major equipment defects are crucial. The classification of defect levels in main electrical equipment is a fundamental task in this process, and it is often performed manually, supplemented by knowledge bases or expert systems. However, this approach is time-consuming and labor-intensive, suffers from poor human–machine interaction, and relies heavily on expert experience. Conversational large language models, such as ChatGPT, ERNIE Bot, and ChatGLM, have garnered widespread recognition across various domains. However, these models may make errors during reasoning, producing biased or even erroneous outputs, a phenomenon referred to as "hallucination". The hallucination problem of large language models poses challenges in domain-specific applications. To mitigate hallucination, researchers often incorporate domain-specific knowledge into these models through methods such as fine-tuning or prompt learning. To enhance model performance while minimizing computational cost, this study adopts the prompt learning approach. Specifically, we propose a large language model prompt learning framework based on knowledge graphs, which provides the large language model with reasoning support by leveraging the specific information stored in the knowledge graph and yields explainable reasoning results. Experimental results demonstrate that our framework achieves superior results on the power defect dataset compared with the non-prompt method.
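For illustration, the sketch below shows one plausible way such knowledge-graph-grounded prompting could be wired up: domain triples relevant to a defect description are retrieved from the graph and embedded in the prompt so the model can ground and explain its classification. This is a minimal assumption-laden sketch, not the paper's actual framework; `retrieve_triples`, `build_prompt`, the prompt template, and the toy triples are all hypothetical.

```python
# Hypothetical sketch of knowledge-graph-based prompt construction.
# All names and the toy triples below are illustrative assumptions,
# not the paper's implementation.

def retrieve_triples(kg, defect_description, k=5):
    """Return up to k (head, relation, tail) triples whose head
    entity appears in the defect description."""
    matches = [t for t in kg if t[0].lower() in defect_description.lower()]
    return matches[:k]

def build_prompt(defect_description, triples):
    """Embed the retrieved domain knowledge into the prompt so the
    LLM can ground its defect-level reasoning and explain it."""
    facts = "\n".join(f"- {h} {r} {t}" for h, r, t in triples)
    return (
        "You are an expert in power equipment defect assessment.\n"
        "Relevant domain knowledge:\n"
        f"{facts}\n\n"
        f"Defect description: {defect_description}\n"
        "Classify the defect level (critical / major / general) and "
        "explain your reasoning step by step using the facts above."
    )

# Toy knowledge graph of (head, relation, tail) triples.
kg = [
    ("oil leakage", "indicates", "seal degradation"),
    ("seal degradation", "defect_level_hint", "major"),
    ("transformer", "component_of", "substation"),
]

description = "Transformer shows oil leakage at the valve."
prompt = build_prompt(description, retrieve_triples(kg, description))
print(prompt)  # This prompt would then be sent to an LLM API.
```

Because the retrieved facts appear verbatim in the prompt, the model's answer can cite them, which is one way the framework's explainable reasoning results could be realized.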