Abstract

Machine Reading Comprehension with unanswerable questions requires that systems not only answer questions when possible, but also output an unanswerable prediction when no answer can be found in the given passage. This task pushes systems toward genuine language understanding rather than merely selecting the span that appears most related to the question, as in conventional extractive reading comprehension. Previous methods have two weaknesses. First, most of them use a simple classifier or a verification module to decide whether a question is unanswerable; however, they predict the unanswerability probability directly, without an explicit, explainable reasoning process. Second, these methods treat answer extraction and unanswerable-question prediction as two independent tasks without considering the logical consistency of their results, so the two tasks can produce contradictory outputs for the same question. In this paper, we propose an AdaPtive Evidence-driven Reasoning Network (APER), which adaptively chooses to extract an answer span or to output an unanswerable prediction based on evidence refined by an Evidence Refining Reasoner. Furthermore, APER directly correlates the two tasks and guarantees the logical consistency of their results through a novel logical consistency training objective. Experiments on SQuAD 2.0 and DuReader demonstrate the superiority and effectiveness of the proposed APER model.
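The abstract does not give the concrete form of APER's logical consistency objective, so the following is only a minimal, hypothetical sketch of how a consistency term could couple answerability prediction with span extraction in a PyTorch-style setup. Every name (joint_loss, start_logits, na_logit) and the squared-difference penalty are illustrative assumptions, not the objective used in the paper.

# Illustrative sketch only: one possible way to couple span extraction and
# answerability prediction with a consistency penalty (not APER's actual loss).
import torch
import torch.nn.functional as F

def joint_loss(start_logits, end_logits, na_logit,
               start_pos, end_pos, is_answerable, alpha=1.0):
    """start_logits/end_logits: (batch, seq_len) span scores.
    na_logit: (batch,) score for "no answer".
    start_pos/end_pos: (batch,) gold span indices (set to 0 when unanswerable).
    is_answerable: (batch,) float, 1.0 if the question has an answer.
    """
    # Standard span extraction loss, applied to answerable examples only.
    span_loss = (F.cross_entropy(start_logits, start_pos, reduction='none')
                 + F.cross_entropy(end_logits, end_pos, reduction='none'))
    span_loss = (span_loss * is_answerable).mean()

    # Answerability classification loss (target 1.0 means "no answer").
    na_loss = F.binary_cross_entropy_with_logits(na_logit, 1.0 - is_answerable)

    # Consistency penalty: if the model is confident an answer exists,
    # its span distributions should also be confident, and vice versa.
    p_has_answer = torch.sigmoid(-na_logit)
    span_conf = (F.softmax(start_logits, dim=-1).max(dim=-1).values
                 * F.softmax(end_logits, dim=-1).max(dim=-1).values)
    consistency = ((p_has_answer - span_conf) ** 2).mean()

    return span_loss + na_loss + alpha * consistency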
