Abstract

Commonsense explanation generation is the task of reasoning about and explaining why a statement contradicts commonsense knowledge, for example, why the statement "My dad grew volleyballs in his garden" is nonsensical. While such reasoning is trivial for humans, it remains challenging for AI systems. Despite their notable performance on text generation and reasoning tasks, large language models (LLMs) often fail to consistently generate coherent and accurate commonsense explanations. To bridge this gap, we propose a novel Two-stage Identification and Prompting (TIP) framework that enhances LLMs' ability to generate commonsense explanations. In the first stage, TIP identifies the nonsensical concept in the given statement, pinpointing the specific element that contradicts commonsense knowledge. In the second stage, TIP generates implicit knowledge based on the identified concept and then leverages this knowledge to guide the LLM in generating an explanation. To demonstrate the effectiveness of TIP, we conduct extensive experiments on the ComVE dataset and a newly constructed CSE dataset, evaluating a variety of LLMs. The results show that TIP consistently outperforms all baseline methods across multiple metrics, improving LLMs' commonsense reasoning and explanation generation capabilities.
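The two-stage pipeline described in the abstract can be sketched as a sequence of prompts. The sketch below is illustrative only: the prompt wording, the helper names (`call_llm`, `tip_explain`), and the stubbed LLM responses are assumptions, not the paper's actual prompts or implementation; a real system would replace `call_llm` with a query to an actual LLM API.

```python
# Illustrative sketch of a two-stage identify-then-prompt pipeline (TIP-style).
# `call_llm` is a hypothetical stand-in for any LLM API; it is stubbed with
# canned answers here so the example is self-contained and runnable.

def call_llm(prompt: str) -> str:
    # Placeholder: a real system would send `prompt` to an LLM and return
    # its completion. These canned replies mimic plausible model outputs.
    if "Which concept" in prompt:
        return "volleyballs"
    if "State the commonsense knowledge" in prompt:
        return ("Volleyballs are manufactured objects, not plants; "
                "only plants can be grown in a garden.")
    return ("The statement is nonsensical because volleyballs are "
            "manufactured objects and cannot be grown in a garden.")

def tip_explain(statement: str) -> str:
    # Stage 1: identify the nonsensical concept in the statement.
    concept = call_llm(
        f'Statement: "{statement}"\n'
        "Which concept in the statement contradicts commonsense? "
        "Answer with the concept only."
    )
    # Stage 2a: elicit implicit knowledge about the identified concept.
    knowledge = call_llm(
        f'Statement: "{statement}"\nNonsensical concept: {concept}\n'
        "State the commonsense knowledge that this concept violates."
    )
    # Stage 2b: generate the explanation, guided by the implicit knowledge.
    return call_llm(
        f'Statement: "{statement}"\nConcept: {concept}\n'
        f"Knowledge: {knowledge}\n"
        "Explain why the statement contradicts commonsense."
    )

print(tip_explain("My dad grew volleyballs in his garden"))
```

Under these assumptions, the key design point is that the explanation prompt in stage 2b is conditioned on both the identified concept and the elicited implicit knowledge, rather than asking the model to explain the statement in a single step.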