Abstract

This paper addresses medical text classification, where texts describe medicines, diseases, or other medical topics. The task remains challenging because medical texts are highly specialized and terminology-dense, requiring professional semantic and structural knowledge to classify. Based on our observations, a medical knowledge graph (KG) can provide such knowledge, although it may be ambiguous. To this end, we propose contrastive knowledge-integrated graph neural networks (ConKGNN) to make full use of this knowledge. Specifically, the proposed method builds two graphs for a medical text, i.e., a text graph and a text-specific subgraph, which contain the text information and the relevant KG information, respectively. The two graphs are merged into a united graph, which is jointly modeled by a graph neural network (GNN). In this way, our approach adequately learns interactions between neighboring nodes while promoting mutual influence between the text and the KG. We further propose graph-based supervised contrastive learning: by randomly cutting off nodes from the text graph, an augmented united graph is obtained, and learning it in a contrastive way enhances the robustness of introducing KG information. Comprehensive experiments on five Chinese medical datasets show that our model outperforms strong baselines remarkably. Consequently, our model can serve as an efficient medical text classifier with excellent performance. We release the code at https://github.com/nolongernome/ConKGNN.
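The two ingredients of the graph-based contrastive step described above, randomly cutting off nodes to obtain an augmented graph and a supervised contrastive objective over graph embeddings, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`drop_nodes`, `supcon_loss`), the `keep_prob` parameter, and the choice to always retain the first node are all hypothetical, and a plain NumPy supervised contrastive loss stands in for the full GNN pipeline.

```python
import numpy as np

def drop_nodes(adj, keep_prob=0.8, rng=None):
    """Randomly cut off nodes from a graph given as an adjacency matrix.

    Returns the induced subgraph adjacency and the indices of kept nodes.
    """
    rng = rng or np.random.default_rng(0)
    n = adj.shape[0]
    mask = rng.random(n) < keep_prob
    mask[0] = True  # hypothetical safeguard: keep at least one node
    idx = np.flatnonzero(mask)
    return adj[np.ix_(idx, idx)], idx

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of graph embeddings z.

    Embeddings with the same label are pulled together; others pushed apart.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / tau                               # scaled cosine similarity
    n = len(labels)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        denom = sum(np.exp(sim[i, j]) for j in range(n) if j != i)
        total += -np.mean([np.log(np.exp(sim[i, j]) / denom) for j in positives])
    return total / n
```

In training, each original united graph and its node-dropped augmentation would be encoded by the GNN, and their embeddings fed into a loss of this form alongside the classification objective.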
