Abstract

Medical Dialogue Information Extraction has attracted increasing attention due to its potential in applications such as electronic medical record generation and automatic disease diagnosis. Recent methods have achieved considerable success but still suffer from two inherent limitations. On the one hand, a medical dialogue is composed of multiple utterances from two speaker roles, i.e., doctor and patient. Existing methods often ignore the transition and interaction of speaker roles, which leaves the model unable to resolve personal pronoun ambiguity. On the other hand, traditional methods are weak at capturing information across multi-turn dialogue and lack the guidance of global context. In this paper, we propose a novel Context-Sensitive Deep Matching model for medical information extraction in multi-turn dialogue, dubbed CSDM. Specifically, we first introduce a multi-view aware channel, which exploits multiple mask templates to capture different views of the dialogue, so that the transition and interaction of speaker roles are explicitly modeled. Second, we utilize a bi-directional attention mechanism to assess the relative importance of different contexts, enabling the model to perceive information across the multi-turn dialogue. Extensive experiments on a public benchmark dataset show that our method achieves new state-of-the-art performance, demonstrating the effectiveness of CSDM.
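To make the two components concrete, the following is a minimal, illustrative sketch (not the authors' released code) of the ideas named above: role-based mask templates over dialogue turns, and a bi-directional attention between a query (e.g., a candidate medical term) and the dialogue context. All function names, shapes, and the role encoding are assumptions for illustration only.

```python
# Hypothetical sketch of role-mask templates and bi-directional attention;
# shapes and names are assumed, not taken from the paper.
import torch
import torch.nn.functional as F

def role_mask(roles: torch.Tensor, keep_role: int) -> torch.Tensor:
    """Mask template keeping only utterances from one speaker role.
    roles: (num_turns,) with 0 = doctor, 1 = patient (assumed encoding)."""
    return (roles == keep_role).float()  # (num_turns,)

def bidirectional_attention(context: torch.Tensor, query: torch.Tensor,
                            mask: torch.Tensor) -> torch.Tensor:
    """context: (num_turns, d) turn embeddings; query: (q_len, d) term tokens.
    Returns a fused, query-aware context representation, (num_turns, 4*d)."""
    scores = context @ query.T                       # (num_turns, q_len)
    # context-to-query: each dialogue turn attends over query tokens
    c2q = F.softmax(scores, dim=1) @ query           # (num_turns, d)
    # query-to-context: weight turns by their best-matching query token,
    # excluding masked-out turns from the softmax
    turn_scores = scores.max(dim=1).values.masked_fill(mask == 0, float('-inf'))
    q2c = F.softmax(turn_scores, dim=0).unsqueeze(1) * context  # (num_turns, d)
    fused = torch.cat([context, c2q, context * c2q, q2c], dim=-1)
    return fused * mask.unsqueeze(1)                 # zero out masked turns

# Toy usage: 4 turns alternating doctor/patient, 3-token query, d = 8.
ctx, qry = torch.randn(4, 8), torch.randn(3, 8)
roles = torch.tensor([0, 1, 0, 1])
patient_view = bidirectional_attention(ctx, qry, role_mask(roles, keep_role=1))
print(patient_view.shape)  # torch.Size([4, 32])
```

Running the same attention under several mask templates (patient-only, doctor-only, full dialogue) yields the multiple views that a model in this spirit could then combine.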
