Abstract

Remarkable success has been achieved in the last few years on some limited machine reading comprehension (MRC) tasks. However, it is still difficult to interpret the predictions of existing MRC models. In this paper, we focus on extracting evidence sentences that can explain or support the answers of multiple-choice MRC tasks, where the majority of answer options cannot be directly extracted from reference documents. Due to the lack of ground truth evidence sentence labels in most cases, we apply distant supervision to generate imperfect labels and then use them to train an evidence sentence extractor. To denoise the noisy labels, we apply a recently proposed deep probabilistic logic learning framework to incorporate both sentence-level and cross-sentence linguistic indicators for indirect supervision. We feed the extracted evidence sentences into existing MRC models and evaluate the end-to-end performance on three challenging multiple-choice MRC datasets: MultiRC, RACE, and DREAM, achieving comparable or better performance than the same models that take as input the full reference document. To the best of our knowledge, this is the first work extracting evidence sentences for multiple-choice MRC.

Highlights

  • There has been increasing interest in machine reading comprehension (MRC)

  • We mainly focus on multiple-choice MRC (Richardson et al., 2013; Mostafazadeh et al., 2016; Ostermann et al., 2018): given a document and a question, the task aims to select the correct answer option(s) from a small number of answer options associated with this question

  • Inspired by Integer Linear Programming (ILP) models for summarization (Berg-Kirkpatrick et al., 2011; Boudin et al., 2015), we model evidence sentence extraction as a maximum coverage problem and define the value of a selected sentence set as the sum of the weights for the unique words it contains (a minimal sketch of this selection criterion follows this list)

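The following is a minimal sketch of that maximum-coverage selection. It uses a greedy approximation rather than the ILP solver the highlight refers to, and the word-weighting scheme (overlap with the question-answer pair), the function names, and the toy data are illustrative assumptions, not the authors' exact formulation.

    import re
    from typing import Dict, List, Set

    def word_types(text: str) -> Set[str]:
        # Unique lowercased word tokens in a piece of text.
        return set(re.findall(r"[a-z0-9']+", text.lower()))

    def select_evidence(sentences: List[str],
                        word_weights: Dict[str, float],
                        k: int = 3) -> List[int]:
        # Greedily pick up to k sentence indices so that the summed weight
        # of the unique words covered by the selected set is maximized.
        covered: Set[str] = set()
        chosen: List[int] = []
        for _ in range(k):
            best_idx, best_gain = None, 0.0
            for i, sent in enumerate(sentences):
                if i in chosen:
                    continue
                gain = sum(word_weights.get(w, 0.0)
                           for w in word_types(sent) - covered)
                if gain > best_gain:
                    best_idx, best_gain = i, gain
            if best_idx is None:  # no remaining sentence adds value
                break
            chosen.append(best_idx)
            covered |= word_types(sentences[best_idx])
        return chosen

    # Toy usage: weight words by whether they appear in the question-answer pair.
    doc = ["Tom packed his bag the night before the trip.",
           "He left for the airport at dawn.",
           "The weather that morning was stormy."]
    qa_pair = "When did Tom leave for the airport? At dawn."
    weights = {w: 1.0 for w in word_types(qa_pair)}
    print(select_evidence(doc, weights, k=1))  # -> [1]

An ILP formulation would optimize the same coverage objective exactly; the greedy variant is used here only to keep the sketch short.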

Summary

Introduction

There has been increasing interest in machine reading comprehension (MRC). We mainly focus on multiple-choice MRC (Richardson et al., 2013; Mostafazadeh et al., 2016; Ostermann et al., 2018): given a document and a question, the task aims to select the correct answer option(s) from a small number of answer options associated with this question. Existing multiple-choice MRC models (Wang et al., 2018b; Radford et al., 2018) take as input the entire reference document and seldom offer any explanation, making their predictions extremely difficult to interpret. It is natural for human readers to use sentences from a given text to explain why they select a certain answer option in reading tests (Bax, 2013). As a preliminary attempt, we focus on extracting evidence sentences that entail or support a question-answer pair from the given reference document.
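To make the task format concrete, below is a hypothetical multiple-choice instance together with the kind of evidence annotation this work aims to produce; the field names and the example itself are illustrative and do not correspond to the schema of RACE, DREAM, or MultiRC.

    # A hypothetical multiple-choice MRC instance; field names are
    # illustrative, not the schema of any particular dataset.
    instance = {
        "document": [
            "Sam had never been to the coast before.",
            "Her aunt invited her to spend the summer by the sea.",
            "Every morning she took swimming lessons at the local pool.",
        ],
        "question": "How did Sam spend her mornings during the summer?",
        "options": ["She learned to swim",   # correct, but not a verbatim span
                    "She worked at a shop",
                    "She visited museums",
                    "She studied for exams"],
        "answer": 0,       # index of the correct option
        "evidence": [2],   # sentence indices that support the answer
    }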
