Abstract

Natural language inference (NLI) is an increasingly important task in natural language processing, and explainable NLI generates natural language explanations (NLEs) in addition to label predictions, making NLI explainable and more acceptable. However, NLEs generated by current models often violate commonsense or lack informativeness. In this paper, we propose a knowledge-enhanced explainable NLI framework (KxNLI) that leverages a Knowledge Graph (KG) to address these problems. Subgraphs are constructed from the KG based on the concept set of the input sequence. The contextual embedding of the input and the graph embedding of the subgraphs are used to guide NLE generation through a copy mechanism. Furthermore, the generated NLEs are used to augment the original data. Experimental results show that KxNLI achieves state-of-the-art (SOTA) results on the SNLI dataset when the pretrained model is fine-tuned on the augmented data. In addition, the proposed mechanisms of knowledge enhancement and rationale utilization perform well on a vanilla seq2seq model and transfer better to the MultiNLI dataset. To comprehensively evaluate the generated NLEs, we design two metrics that measure their quality from the perspectives of accuracy and informativeness, respectively. The results show that KxNLI provides high-quality NLEs while making accurate predictions.
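As a rough illustration of the subgraph-construction step described above, the sketch below shows how concepts matched in the input sequence can seed a small KG subgraph. The toy graph, the `extract_concepts` matcher, and the `build_subgraph` helper are hypothetical stand-ins for illustration, not the authors' implementation; in the full framework the subgraph would additionally be encoded and fused with the contextual embedding to guide the copy-mechanism decoder.

```python
from typing import Dict, List, Set, Tuple

# Toy knowledge graph: concept -> list of (relation, neighbour) edges.
# A real system would query a large KG such as ConceptNet instead.
TOY_KG: Dict[str, List[Tuple[str, str]]] = {
    "dog": [("IsA", "animal"), ("CapableOf", "bark")],
    "animal": [("AtLocation", "outdoors")],
    "bark": [("RelatedTo", "sound")],
}


def extract_concepts(sentence: str, kg: Dict[str, List[Tuple[str, str]]]) -> Set[str]:
    """Naive concept matching: keep tokens that appear as KG nodes."""
    tokens = {tok.lower().strip(".,") for tok in sentence.split()}
    return tokens & kg.keys()


def build_subgraph(concepts: Set[str],
                   kg: Dict[str, List[Tuple[str, str]]],
                   hops: int = 1) -> List[Tuple[str, str, str]]:
    """Collect edges reachable from the input concepts within `hops` steps."""
    frontier, edges = set(concepts), []
    for _ in range(hops):
        next_frontier = set()
        for head in frontier:
            for rel, tail in kg.get(head, []):
                if (head, rel, tail) not in edges:
                    edges.append((head, rel, tail))
                next_frontier.add(tail)
        frontier = next_frontier
    return edges


if __name__ == "__main__":
    premise = "A dog barks at the mailman."
    hypothesis = "An animal is making a sound."
    concepts = extract_concepts(premise, TOY_KG) | extract_concepts(hypothesis, TOY_KG)
    subgraph = build_subgraph(concepts, TOY_KG, hops=2)
    print("Concepts:", sorted(concepts))
    print("Subgraph edges:", subgraph)
    # Downstream, a graph encoder would embed these edges, and the decoder's
    # copy mechanism could copy concept tokens into the generated explanation.
```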
