Abstract

With various Pre-trained Language Models (PLMs) blooming, Machine Reading Comprehension (MRC) systems have achieved significant improvements on various benchmarks and have even surpassed human performance. However, most existing work focuses only on the accuracy of answer predictions and neglects the importance of explanations for those predictions, which is a major obstacle to deploying these models in real-life applications where humans must be convinced. This paper proposes a novel unsupervised self-explainable framework, called Recursive Dynamic Gating (RDG), for the machine reading comprehension task. The main idea is that the proposed system tries to use less passage information while achieving results similar to a system that uses the whole passage; the filtered passage then serves as a textual explanation. We carried out experiments on three multiple-choice MRC datasets (in English and Chinese) and found that the proposed system not only achieves better performance in answer prediction but also provides more informative explanations than the attention mechanism.
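The abstract does not specify how the gating is implemented, but the core idea of keeping only a gated subset of the passage and treating that subset as the explanation can be sketched as follows. This is a minimal illustration under assumed inputs (pre-computed sentence and question embeddings, a simple sigmoid gate, and a fixed threshold), not the authors' RDG method:

```python
import numpy as np

def gate_passage(sentence_vecs, question_vec, threshold=0.5):
    """Score each passage sentence against the question with a sigmoid gate
    and keep only sentences whose gate value exceeds the threshold.
    The kept subset plays the role of the textual explanation."""
    # Gate score per sentence: sigmoid of the sentence-question dot product.
    scores = 1.0 / (1.0 + np.exp(-(sentence_vecs @ question_vec)))
    keep = scores >= threshold
    return keep, scores

# Toy example: three "sentences" in a 4-dimensional embedding space.
sents = np.array([[1.0, 0.0, 0.5, 0.0],
                  [-1.0, 0.2, 0.0, 0.1],
                  [0.3, 0.9, 0.4, 0.0]])
q = np.array([0.8, 0.1, 0.5, 0.0])
keep, scores = gate_passage(sents, q)
# Sentences with keep == True would be fed to the answer predictor
# and shown to the user as the explanation.
```

In this sketch a system that answers correctly from only the kept sentences demonstrates, in the spirit of the paper, that the discarded text was unnecessary for the prediction.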
