Abstract

Compared to traditional single-turn question answering (QA) over text, also known as machine reading comprehension (MRC), conversational MRC tasks require models to understand each question in light of not only the information in the passage but also the previous question-answer pairs. In this paper, we introduce a directional attention weaving (DAW) mechanism that represents dialog context more comprehensively in two respects. First, we apply self-attention at each level of the question-aware passage hidden representation to collect global information for each question serving as conversation-history input, which we refer to as multi-round self-alignment. Second, unlike existing methods that use forward recurrent neural networks (RNNs) to pass limited dialog-context information, which handle long conversation histories and topic returns poorly, we propose a unidirectional attention mechanism that builds history-aware passage representations by computing attention along the dialog's progression. The DAW mechanism underlies our model, CANet. We evaluate the proposed model on the CoQA dataset and achieve competitive results among published models, outperforming the published baselines on the hidden test set by a substantial margin. Further analysis shows that the DAW mechanism can be flexibly applied to strong single-turn MRC models and effectively augments dialog-history representations to handle long conversations and topic returns.
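
The following is a minimal sketch, not the authors' released code, of the two DAW components the abstract describes, written in PyTorch. The function names (self_align, history_attend) and all tensor shapes are hypothetical illustrations of the idea: self-attention over a question-aware passage representation, and attention computed forward along the dialog turns in place of an RNN.

```python
# Hypothetical sketch of the two DAW components described in the abstract.
# Names and shapes are illustrative assumptions, not the paper's code.
import torch
import torch.nn.functional as F

def self_align(H: torch.Tensor) -> torch.Tensor:
    """Multi-round self-alignment: self-attention over one level of the
    question-aware passage representation H, shape (passage_len, hidden),
    so every position collects global information."""
    scores = H @ H.T / H.size(-1) ** 0.5      # (L, L) similarity matrix
    return F.softmax(scores, dim=-1) @ H      # globally informed representation

def history_attend(turns: list[torch.Tensor]) -> list[torch.Tensor]:
    """Unidirectional attention along the dialog progression: the
    representation for turn t attends only to turns 1..t, so long histories
    and topic returns are reachable in one hop instead of being squeezed
    through a forward RNN. `turns` holds (passage_len, hidden) tensors,
    ordered oldest turn first."""
    history_aware = []
    for t, H_t in enumerate(turns):
        past = torch.cat(turns[: t + 1], dim=0)        # ((t+1)*L, hidden)
        scores = H_t @ past.T / H_t.size(-1) ** 0.5    # (L, (t+1)*L)
        weights = F.softmax(scores, dim=-1)            # attend over all past turns
        history_aware.append(weights @ past)           # history-aware passage rep
    return history_aware
```

Because each turn attends only backward, the mechanism is unidirectional: earlier turns never see later ones, matching the dialog's natural order, while a turn that returns to an earlier topic can still attend directly to that distant history.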
