Abstract

Medical knowledge graphs have emerged as essential tools for representing complex relationships among medical entities. However, existing methods for learning embeddings from medical knowledge graphs, such as DistMult, RotatE, ConvE, InteractE, JointE, and ConvKB, may not adequately capture the unique challenges posed by the domain, including the heterogeneity of medical entities, rich hierarchical structures, large scale, high dimensionality, and noisy, incomplete data. In this study, we propose an Adaptive Hierarchical Transformer with Memory (AHTM) model, coupled with a teacher–student model compression approach, to address these challenges and learn embeddings from a rich medical knowledge dataset containing diverse entities and relationship sets. We evaluate the AHTM model on this newly constructed “Med-Dis” dataset and demonstrate its superiority over baseline methods. The AHTM model achieves substantial improvements in Mean Rank (MR) and Hits@10, with the highest MR value improving by nearly 56% and Hits@10 by 39%. We observe similar performance gains on the “FB15K-237” and “WN18RR” datasets. Our model compression approach, incorporating knowledge distillation and weight quantization, effectively reduces the model’s storage and computational requirements, making it suitable for resource-constrained environments. Overall, the proposed AHTM model and compression techniques offer a novel and effective solution for learning embeddings from medical knowledge graphs and enhancing our understanding of complex relationships among medical entities, while addressing the inadequacies of existing approaches.
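The abstract names two compression ingredients, knowledge distillation and weight quantization, without detailing them. As a hedged illustration only (the paper's exact losses, temperatures, and bit widths are not given here), the following minimal NumPy sketch shows the standard forms of both: a temperature-softened KL divergence between teacher and student output distributions, and symmetric linear int8 quantization of a weight tensor. All function names and parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as in standard knowledge distillation (Hinton et al. style).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))) * temperature ** 2)

def quantize_int8(weights):
    # Symmetric linear quantization: map floats to int8 with one scale factor.
    w = np.asarray(weights, dtype=np.float32)
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights for inference.
    return q.astype(np.float32) * scale
```

In a teacher–student setup of this kind, the distillation term is typically mixed with the student's ordinary task loss, and quantization is applied to the trained student to shrink storage roughly 4x relative to float32.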
