Abstract
In the field of natural language processing, intelligent systems often require strong commonsense reasoning abilities to excel at commonsense question answering (QA). To enhance the interpretability of QA systems, a natural approach is to generate textual explanations alongside the predicted answers, making the results understandable. Recent efforts address this by using a prompted language model (LM) with frozen parameters to generate explanations in a few-shot manner. These explanations are then used as additional context to guide a finetuned LM in making the final decision. However, these methods still underutilize the semantic information embedded in the explanatory text. Consequently, the reasoning models tend to rely on word co-occurrence and the knowledge stored in the model itself rather than fully exploiting the explanations. We therefore propose a two-stage Explanation Generation and Language Reasoning framework (EGLR). Our framework leverages the in-context learning capability of LMs to generate explanations and reformulates explanation-based reasoning as a semantic matching problem. Through joint prompting and training, our model selects the most appropriate explanation by comparing multiple candidates. Experimental results on three public datasets demonstrate that our framework achieves superior performance on the full datasets while maintaining performance in out-of-domain scenarios.
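The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative outline, not the authors' implementation: the two LM calls (few-shot explanation generation by a frozen prompted LM, and semantic-matching scoring by a finetuned LM) are stubbed with placeholder logic, and all function names are assumptions for exposition.

```python
# Hypothetical sketch of the two-stage EGLR pipeline. Both LM calls are
# stubbed; in the actual framework they would be a frozen prompted LM
# (stage 1) and a finetuned LM scoring explanation-answer matches (stage 2).

def generate_explanations(question, choices, n=3):
    # Stage 1 (stub): a frozen LM, prompted with a few in-context
    # examples, would generate n candidate explanations here.
    return [f"{question} This suggests {c}." for c in choices[:n]]

def semantic_match_score(explanation, choice):
    # Stage 2 (stub): a finetuned LM would score how well the
    # explanation semantically supports the answer choice.
    # Token overlap stands in for that learned matching score.
    e = set(explanation.lower().replace(".", "").split())
    c = set(choice.lower().split())
    return len(e & c) / max(len(c), 1)

def eglr_answer(question, choices):
    # Compare all (choice, explanation) pairs and select the pair
    # with the best semantic match, yielding both the predicted
    # answer and the explanation that justifies it.
    explanations = generate_explanations(question, choices)
    _, answer, explanation = max(
        (semantic_match_score(e, c), c, e)
        for c in choices for e in explanations
    )
    return answer, explanation

answer, explanation = eglr_answer(
    "Where would you put a book you are reading?",
    ["nightstand", "oven"],
)
```

The key design point the abstract emphasizes is that the final decision is made by comparing multiple explanations against each other, rather than treating a single explanation as passive extra context.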