Abstract

Pre-trained language models (PLMs) have achieved outstanding performance on Machine Reading Comprehension (MRC) tasks, but the interpretability of these models remains unclear. In this paper, we exploit the strengths of the pre-trained T5 (Text-to-Text Transfer Transformer) model in evidence inference to improve the interpretability of MRC models, and propose an interpretable reading comprehension model based on T5 that is trained on a more accurate evidence corpus and can infer precise interpretations for the answers it generates. First, we propose a novel T5-based Semantic Textual Similarity (STS) model to label training evidence more precisely, using label reconstruction and data augmentation. Then, we propose a T5-based interpretable reading comprehension model trained on this more accurate evidence, including a threshold-based method that filters out erroneous evidence during model training. Experiments show that our model significantly outperforms the baseline BERT (Pseudo-data Training) model, improving evidence F1-score on SQuAD 1.1 by 8.7 and 8.0 points at the base and large model sizes, respectively. Our code is available at https://github.com/MN-Guan/T5-InterMRC.
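To make the threshold-based filtering step concrete, the sketch below shows one plausible way such a filter could operate: score each candidate evidence sentence with an STS model and discard training examples that fall below a cutoff. This is a minimal illustrative sketch, not the paper's implementation; the `sts_score_fn` callable, the field names (`question`, `answer`, `evidence`), and the default threshold of 0.5 are all assumptions.

```python
# Illustrative sketch of threshold-based evidence filtering.
# All names and the threshold value are assumptions, not taken
# from the T5-InterMRC codebase.

def filter_evidence(examples, sts_score_fn, threshold=0.5):
    """Keep only training examples whose pseudo-labeled evidence
    scores above `threshold` under an STS model.

    `sts_score_fn(text_a, text_b)` is assumed to return a
    similarity score in [0, 1].
    """
    kept = []
    for ex in examples:
        # Compare the candidate evidence sentence against the
        # question-answer pair it is supposed to support.
        query = ex["question"] + " " + ex["answer"]
        score = sts_score_fn(query, ex["evidence"])
        if score >= threshold:
            kept.append(ex)
    return kept
```

Under this reading, noisy pseudo-labeled evidence is pruned before training, so the reading comprehension model only sees evidence the STS model rates as sufficiently similar to the question-answer pair.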
