Abstract

As a fundamental task in natural language processing, textual entailment recognition has practical applications in question answering (QA), information retrieval (IR), information extraction, and many other tasks. Traditional approaches to textual entailment mainly include classification based on hand-crafted features, methods based on word similarity, and the like; they require extensive manual feature extraction and rule construction. Deep neural networks avoid both the manual feature engineering of traditional machine learning methods and the error accumulation introduced by NLP preprocessing tools. With the strong results of deep learning in natural language processing, research on natural language inference and textual entailment recognition has also grown. Different tasks emphasize different aspects of the same text; for the textual entailment recognition task, the focus is usually on whether the sub-events in each text match. If a sentence can be decomposed into its related sub-events, and the sub-events of the hypothesis text can be shown to be contained in the sub-events of the premise text, then the entailment relation between the two sentences can be determined. The mLSTM model achieved an accuracy of 86.1% on the SNLI corpus, the best result at present, but it is not effective on other, smaller corpora. This article aims to improve the mLSTM model by building an mGRU model based on the GRU (Gated Recurrent Unit), and verifies the model's performance on the SNLI and MultiNLI corpora.
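To make the matching idea concrete, the following is a minimal, illustrative numpy sketch of an mGRU-style model: encode premise and hypothesis with GRUs, then run a matching GRU over the hypothesis, where each step receives an attention-weighted summary of the premise states concatenated with the current hypothesis state. This is not the paper's implementation; the class and function names are hypothetical, the weights are random, and the learned attention of the original mLSTM formulation is simplified here to dot-product attention.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state.
    Illustrative only -- weights are random, not trained."""
    def __init__(self, input_dim, hidden_dim, rng, scale=0.1):
        self.Wz = rng.normal(0, scale, (hidden_dim, input_dim))
        self.Uz = rng.normal(0, scale, (hidden_dim, hidden_dim))
        self.Wr = rng.normal(0, scale, (hidden_dim, input_dim))
        self.Ur = rng.normal(0, scale, (hidden_dim, hidden_dim))
        self.Wh = rng.normal(0, scale, (hidden_dim, input_dim))
        self.Uh = rng.normal(0, scale, (hidden_dim, hidden_dim))

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)          # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h)          # reset gate
        h_cand = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1 - z) * h + z * h_cand

def match_gru(premise, hypothesis, hidden_dim, rng):
    """mGRU-style word-by-word matching (hypothetical sketch).
    premise, hypothesis: (seq_len, emb_dim) word-embedding matrices.
    Returns the final matching state, which a classifier would consume."""
    emb_dim = premise.shape[1]
    enc_p = GRUCell(emb_dim, hidden_dim, rng)
    enc_h = GRUCell(emb_dim, hidden_dim, rng)
    m_cell = GRUCell(2 * hidden_dim, hidden_dim, rng)   # matching GRU

    # Encode the premise; keep every hidden state for attention.
    h = np.zeros(hidden_dim)
    P = []
    for x in premise:
        h = enc_p.step(x, h)
        P.append(h)
    P = np.stack(P)                                     # (len_p, hidden_dim)

    # Encode the hypothesis, matching each step against the premise.
    h = np.zeros(hidden_dim)
    h_match = np.zeros(hidden_dim)
    for x in hypothesis:
        h = enc_h.step(x, h)
        scores = P @ h                                  # dot-product attention
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                            # softmax over premise
        context = alpha @ P                             # weighted premise summary
        h_match = m_cell.step(np.concatenate([context, h]), h_match)
    return h_match
```

A three-way softmax classifier (entailment / contradiction / neutral) over the final matching state would complete the model; training the weights end-to-end is omitted here.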
