Abstract

 Conversational machine reading comprehension (CMRC) extends single-turn machine reading comprehension to multi-turn settings, reflecting the conversational way in which people seek information. Because successive questions in a conversation are correlated, the dialogue history is critical to solving the CMRC task. However, when incorporating the dialogue history into the current question, existing CMRC models ignore the interference that arises from using excessive historical information. In this paper, we design an effective Question Selection Module (QSM) that selects the most relevant historical dialogue turns for answering the current question through question coupling and coarse-to-fine matching. In addition, most existing approaches perform memory inference with stacked RNNs at the context-word level, without considering semantic information flowing in the direction of the conversation. To address this problem, we implement sequential recurrent reasoning at the turn level of the dialogue, where each turn's representation contains all of the filtered historical semantics available at the current step. We conduct experiments on two benchmark datasets, QuAC and CoQA. The results confirm that our model satisfactorily captures the valid history and performs recurrent reasoning: it achieves an F1-score of 83.0% on the CoQA dataset and 67.8% on the QuAC dataset, outperforming the best alternative model by 4.6% on CoQA and 2.7% on QuAC.
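The abstract describes two mechanisms: selecting the most relevant historical turns for the current question, and running recurrent reasoning over turns rather than words. The following is a minimal illustrative sketch of that two-stage idea, not the paper's method: cosine similarity stands in for question coupling and coarse-to-fine matching, and a toy GRU-style cell stands in for the paper's turn-level recurrence; the function names and the embedding setup are assumptions.

```python
import numpy as np

def select_history(current_q, history, k=2):
    """Hypothetical stand-in for the QSM: score each historical turn
    against the current question and keep the top-k most relevant,
    preserving conversational order."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    scores = [cos(current_q, h) for h in history]
    top = sorted(range(len(history)), key=lambda i: -scores[i])[:k]
    return [history[i] for i in sorted(top)]  # restore dialogue order

def turn_level_reasoning(turns, dim, seed=0):
    """Toy GRU-style recurrence over turn embeddings: one reasoning step
    per dialogue turn, so semantics flow in conversation order."""
    rng = np.random.default_rng(seed)
    Wz = rng.normal(size=(dim, 2 * dim))  # update-gate weights (random, untrained)
    Wh = rng.normal(size=(dim, 2 * dim))  # candidate-state weights
    h = np.zeros(dim)
    for x in turns:
        xh = np.concatenate([x, h])
        z = 1.0 / (1.0 + np.exp(-Wz @ xh))  # update gate
        h_cand = np.tanh(Wh @ xh)           # candidate state
        h = (1 - z) * h + z * h_cand
    return h

# Usage: filter a 5-turn history down to the 2 most relevant turns,
# then reason over the kept turns plus the current question.
dim = 8
rng = np.random.default_rng(1)
history = [rng.normal(size=dim) for _ in range(5)]
current_q = rng.normal(size=dim)
kept = select_history(current_q, history, k=2)
state = turn_level_reasoning(kept + [current_q], dim)
```

Because the recurrence runs over turns instead of context words, the number of reasoning steps grows with the dialogue length, not the passage length, which is the intuition behind the turn-level design described above.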
