Abstract

Machine Reading Comprehension (MRC) is a challenging natural language processing (NLP) task that has become enormously popular; it aims to teach machines to comprehend unstructured text and answer a given question. Recently, much progress has been made in single-document MRC. However, real-world MRC, especially in web search engines, often requires the machine to answer questions by analyzing multiple documents rather than a single one. Compared with single-document MRC (the MRC task on a single document), multi-document MRC is much more challenging, since it must collect multiple verifiable answer candidates from different documents. In this paper, we address the problem with multi-task learning. We present a multi-task neural network framework for multi-document MRC, in which different loss functions are used for multiple task objectives. Extensive experiments on a publicly available large-scale multi-document MRC dataset (i.e., MS-MARCO [1]) demonstrate the effectiveness of the proposed model. Empirical results show that our model outperforms all tested baselines by a wide margin and achieves satisfactory performance on MS-MARCO; it also achieves better results on another challenging MRC dataset (i.e., SearchQA [2]).
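The abstract does not specify the exact task heads or loss functions; purely as an illustration of the multi-task idea it describes, the sketch below assumes a PyTorch-style shared encoder with a hypothetical answer-span head and passage-ranking head, and combines their losses with made-up weights w_span and w_rank. It is a minimal sketch, not the paper's actual model.

```python
# Minimal multi-task sketch (hypothetical heads and weights, not the paper's exact design).
import torch
import torch.nn as nn

class MultiTaskMRC(nn.Module):
    def __init__(self, vocab_size=30000, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        # Shared encoder over the concatenated question/passage tokens.
        self.encoder = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.span_head = nn.Linear(2 * hidden, 2)   # answer-span start/end logits
        self.rank_head = nn.Linear(2 * hidden, 1)   # passage-ranking score

    def forward(self, tokens):
        enc, _ = self.encoder(self.embed(tokens))           # (B, T, 2H)
        start_end = self.span_head(enc)                      # (B, T, 2)
        rank = self.rank_head(enc.mean(dim=1)).squeeze(-1)   # (B,)
        return start_end[..., 0], start_end[..., 1], rank

def multi_task_loss(start_logits, end_logits, rank_scores,
                    start_gold, end_gold, rank_gold,
                    w_span=1.0, w_rank=0.5):
    """Combine per-task losses into one objective with fixed (assumed) weights."""
    ce, bce = nn.CrossEntropyLoss(), nn.BCEWithLogitsLoss()
    span_loss = ce(start_logits, start_gold) + ce(end_logits, end_gold)
    rank_loss = bce(rank_scores, rank_gold)
    return w_span * span_loss + w_rank * rank_loss
```

In such a setup, a single backward pass through the weighted sum updates the shared encoder with gradients from both objectives, which is the usual way multiple task-specific losses are trained jointly.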
