Abstract

Multi-party dialogue machine reading comprehension (MRC) is more challenging than plain-text MRC because it involves multiple speakers, more complex information flow, and richer discourse structure. Most previous work focuses on decoupling speaker-aware and utterance-aware information to overcome these difficulties; on this basis, self- and pseudo-self-supervised auxiliary tasks that predict speakers and key utterances have been proposed. However, these works ignore the information interaction among the key utterance, the question, and the dialogue context, and they impose no constraint between the two auxiliary tasks. Herein, we propose an enhanced key-utterance interaction model that takes the key utterance predicted by the auxiliary task as prior information. A co-attention mechanism captures the critical information interaction among the dialogue context, the question, and the key utterance from the two perspectives of question-to-dialogue and dialogue-to-question. In addition, we minimize the mutual information (MI) between the two auxiliary tasks to prevent mutual interference and information overlap. Experimental results show that the proposed model achieves significant improvements over dialogue MRC baseline models on the Molweni and FriendsQA datasets.
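To make the two-direction interaction concrete, the sketch below illustrates a generic bidirectional co-attention layer of the kind described above: dialogue-context token states attend over question token states (question-to-dialogue) and vice versa (dialogue-to-question). All names, shapes, and the hidden size are illustrative assumptions, not the authors' released implementation.

```python
# Minimal co-attention sketch (PyTorch); shapes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoAttention(nn.Module):
    """Bidirectional co-attention between dialogue-context and question states."""
    def __init__(self, hidden: int):
        super().__init__()
        self.proj = nn.Linear(hidden, hidden, bias=False)

    def forward(self, dialogue, question):
        # dialogue: (B, Ld, H), question: (B, Lq, H)
        scores = torch.bmm(self.proj(dialogue), question.transpose(1, 2))  # (B, Ld, Lq)
        # question-to-dialogue: each dialogue token attends over question tokens
        q2d = torch.bmm(F.softmax(scores, dim=-1), question)              # (B, Ld, H)
        # dialogue-to-question: each question token attends over dialogue tokens
        d2q = torch.bmm(F.softmax(scores.transpose(1, 2), dim=-1), dialogue)  # (B, Lq, H)
        return q2d, d2q

if __name__ == "__main__":
    dialogue = torch.randn(2, 40, 128)   # dialogue-context token states (hypothetical sizes)
    question = torch.randn(2, 12, 128)   # question token states
    q2d, d2q = CoAttention(128)(dialogue, question)
    print(q2d.shape, d2q.shape)          # (2, 40, 128) and (2, 12, 128)
```

In the model described in the abstract, the key-utterance representation would be fused into the dialogue side before such an interaction; the sketch only shows the two attention directions.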
