Abstract

Representation learning for medical knowledge graphs (KGs) is an important task and forms the foundation of intelligent medical applications such as disease diagnosis and healthcare question answering. Many embedding models have been proposed to learn vector representations for entities and relations, but they ignore three important properties of medical KGs: multi-modality, imbalance, and heterogeneity. Entities in a medical KG can carry unstructured multi-modal content, such as images and text. At the same time, the graph consists of multiple types of entities and relations, and each entity has a varying number of neighbors. In this paper, we propose a Multi-modal Multi-Relational Feature Aggregation Network (MMRFAN) for medical knowledge representation learning. To handle the multi-modal content of an entity, we propose an adversarial feature learning model that maps the textual and image information of the entity into the same vector space and learns a common multi-modal representation. To better capture the complex structure and rich semantics of the graph, we design a neighbor sampling mechanism and aggregate neighbors with intra- and inter-relation attention. We evaluate our model on three knowledge graphs, FB15k-237, IMDb, and Symptoms-in-Chinese, on link prediction and node classification tasks. Experimental results show that our approach outperforms state-of-the-art methods.
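The abstract does not specify how the intra- and inter-relation attention is computed, so the following is only a minimal sketch of that aggregation idea, not the authors' implementation. It assumes pre-computed entity embeddings grouped by relation type; the module name `RelationalAggregator` and all dimension choices are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code): aggregate a center entity's
# neighbors with intra-relation attention (over neighbors sharing a relation)
# followed by inter-relation attention (over the per-relation summaries).
from typing import List
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalAggregator(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.intra_attn = nn.Linear(2 * dim, 1)  # scores a (center, neighbor) pair
        self.inter_attn = nn.Linear(2 * dim, 1)  # scores a (center, relation summary) pair
        self.update = nn.Linear(2 * dim, dim)    # fuses center with aggregated neighborhood

    def forward(self, center: torch.Tensor,
                neighbors_by_rel: List[torch.Tensor]) -> torch.Tensor:
        # center: (dim,); neighbors_by_rel: one (n_r, dim) tensor per relation type.
        rel_summaries = []
        for neigh in neighbors_by_rel:
            # Intra-relation attention: weight neighbors under the same relation.
            pair = torch.cat([center.expand_as(neigh), neigh], dim=-1)   # (n_r, 2*dim)
            alpha = F.softmax(self.intra_attn(pair).squeeze(-1), dim=0)  # (n_r,)
            rel_summaries.append(alpha @ neigh)                          # (dim,)
        rel_summaries = torch.stack(rel_summaries)                       # (R, dim)

        # Inter-relation attention: weight the per-relation summaries.
        pair = torch.cat([center.expand_as(rel_summaries), rel_summaries], dim=-1)
        beta = F.softmax(self.inter_attn(pair).squeeze(-1), dim=0)       # (R,)
        aggregated = beta @ rel_summaries                                # (dim,)

        # Combine the center entity with its aggregated neighborhood.
        return torch.tanh(self.update(torch.cat([center, aggregated], dim=-1)))
```

In this sketch, the two attention levels mirror the heterogeneity described above: intra-relation attention handles entities with many neighbors under one relation, while inter-relation attention balances relations of different importance. In MMRFAN itself the entity inputs would be the multi-modal common representations produced by the adversarial feature learning component.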
