In natural language processing, intelligent systems typically require strong commonsense reasoning abilities to excel at commonsense question answering (QA). A natural way to enhance the interpretability of QA systems is to generate textual explanations alongside the predicted answers, making the results understandable. Recent efforts address this by using a prompted language model (LM) with frozen parameters to generate explanations in a few-shot manner; these explanations then serve as additional context to guide a fine-tuned LM in making the final decision. However, such methods still underutilize the semantic information embedded in the explanatory text, so the reasoning models tend to rely on word co-occurrence and the knowledge stored in the model itself rather than fully exploiting the explanations. We therefore propose a two-stage Explanation Generation and Language Reasoning framework (EGLR). Our framework takes advantage of the in-context learning capability of LMs to generate explanations and reformulates explanation-based reasoning as a semantic matching problem. Through joint prompting and training, our model selects the most appropriate explanation by comparing multiple candidate explanations. Experimental results on three public datasets demonstrate that our framework achieves superior performance on the full datasets while maintaining performance in out-of-domain scenarios.
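To make the two-stage pipeline concrete, the sketch below illustrates one plausible reading of the abstract: a frozen LM is prompted few-shot to produce one explanation per answer choice, and a second model then scores each explanation against the question as a semantic matching problem. This is a minimal illustration only; the function names, prompt format, and the `frozen_lm` / `matcher` interfaces are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of an explain-then-match pipeline in the spirit of
# EGLR; all names and interfaces here are illustrative assumptions.
from typing import List


def generate_explanations(question: str, choices: List[str],
                          frozen_lm, few_shot_prefix: str) -> List[str]:
    """Stage 1: prompt a frozen LM few-shot to produce one candidate
    explanation per answer choice."""
    explanations = []
    for choice in choices:
        prompt = (
            f"{few_shot_prefix}\n"
            f"Question: {question}\n"
            f"Candidate answer: {choice}\n"
            f"Explanation:"
        )
        explanations.append(frozen_lm.generate(prompt))  # assumed interface
    return explanations


def select_answer(question: str, choices: List[str],
                  explanations: List[str], matcher) -> int:
    """Stage 2: treat reasoning as semantic matching -- score how well each
    explanation supports the question and return the best-matching index."""
    scores = [matcher.score(question, expl) for expl in explanations]  # assumed interface
    return max(range(len(choices)), key=lambda i: scores[i])
```

In this reading, the `matcher` would be the jointly trained reasoning model, so that answer selection reduces to comparing the candidate explanations rather than relying on the reader LM's internal knowledge alone.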