Abstract
Text classification is a well-studied task in natural language processing. In the medical domain, it remains challenging because discriminating among classes requires domain-specific knowledge, such as specialized terminology and the structured relationships among medical concepts. In this work, we leverage a large Chinese medical knowledge graph (KG) to supply this knowledge and propose a graph neural network (GNN) model to exploit it. Specifically, for a medical text we build two graphs: a text graph, based on the co-occurrences of contextualized words, and a text-specific knowledge graph, retrieved from the KG via the terms that the text and the KG have in common. The two graphs are bridged by these common terms and merged into a joint graph, over which our GNN learns. In this way, the model captures interactions between adjacent nodes, while medical knowledge propagates from the KG into the text representation. To enrich node representations and strengthen knowledge interaction, we inject general prior knowledge into the text graph and domain-specific prior knowledge into the text-specific knowledge graph. Extensive experiments on three medical datasets show that our model significantly outperforms strong baseline methods.
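For a rough sense of how the two graphs can be bridged by common terms, below is a minimal sketch assuming a simple sliding-window text graph and an untyped edge-list KG. The function names, the `window` parameter, and the toy data are illustrative assumptions, not the paper's actual construction (which uses contextualized words and prior-knowledge edge weights).

```python
# Illustrative sketch only; not the paper's implementation.
import networkx as nx

def build_text_graph(tokens, window=3):
    """Text graph: connect words that co-occur within a sliding window."""
    g = nx.Graph()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + window, len(tokens))):
            g.add_edge(w, tokens[j])
    return g

def retrieve_kg_subgraph(kg_edges, terms):
    """Text-specific knowledge graph: keep KG edges whose head or tail
    is a term also appearing in the text (a 'common term')."""
    g = nx.Graph()
    for head, tail in kg_edges:
        if head in terms or tail in terms:
            g.add_edge(head, tail)
    return g

tokens = ["患者", "出现", "发热", "咳嗽"]        # toy medical text (tokenized)
kg = [("发热", "感染"), ("咳嗽", "呼吸道疾病")]  # toy KG edge list
text_g = build_text_graph(tokens)
kg_g = retrieve_kg_subgraph(kg, set(tokens))

# Shared term nodes ("发热", "咳嗽") appear in both graphs, so composing
# them yields a joint graph in which a GNN can propagate KG knowledge
# into the text nodes via those bridge terms.
joint = nx.compose(text_g, kg_g)
```

The key design point the sketch illustrates is that no extra alignment machinery is needed: because common terms are literally the same nodes in both graphs, merging the graphs automatically creates the paths along which GNN message passing carries knowledge from the KG side to the text side.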