Abstract

Biomedical texts often describe relations between specialized entities only implicitly, and automatically extracting drug–drug or drug–disease relations from massive biomedical text collections remains a challenge for many researchers. To this end, this paper presents a relation extraction method based on dependency information fusion that improves the model's ability to predict relations between given biomedical entities. First, we propose a local–global pruning strategy for the dependency syntax tree. Next, we construct a dependency type matrix over the pruned dependency tree, incorporating the sentence's dependency information into the model for feature extraction. We then introduce an attention mechanism into the graph convolutional model by computing attention weights over word–word dependencies, thereby improving on the traditional graph convolutional network. The model uses these attention weights to distinguish the importance of different dependency information, weakening the influence of interfering signals such as word-to-word dependencies unrelated to the entities in long sentences. Our proposed Dependency Information Fusion Attention Graph Convolutional Network (DIF-A-GCN) is evaluated on two biomedical datasets, DDI and CIVIC. The experimental results show that our dependency-information-fusion method outperforms current state-of-the-art biomedical relation extraction models.
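To make the described architecture concrete, the following is a minimal sketch (not the authors' released code) of an attention-weighted graph convolutional layer operating over a pruned dependency tree, where dependency-type embeddings enter the edge attention score. The class and parameter names (e.g. `dep_type_emb`, `w_score`) and the exact way type information is combined with word representations are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyAttentionGCNLayer(nn.Module):
    """Sketch of an attention-weighted GCN layer over a pruned dependency tree."""

    def __init__(self, hidden_dim, num_dep_types, dep_dim):
        super().__init__()
        # Embeddings for dependency types (entries of the dependency type matrix).
        self.dep_type_emb = nn.Embedding(num_dep_types, dep_dim)
        self.w_node = nn.Linear(hidden_dim, hidden_dim)
        # Scores a (head word, dependent word, dependency type) triple.
        self.w_score = nn.Linear(2 * hidden_dim + dep_dim, 1)

    def forward(self, h, adj, dep_type):
        # h:        (batch, n, hidden_dim)  word representations
        # adj:      (batch, n, n)           adjacency of the pruned dependency tree (1 = edge kept)
        # dep_type: (batch, n, n)           dependency-type ids for kept edges (0 = no edge)
        b, n, d = h.size()
        t = self.dep_type_emb(dep_type)                  # (b, n, n, dep_dim)
        h_i = h.unsqueeze(2).expand(b, n, n, d)          # head word, broadcast over dependents
        h_j = h.unsqueeze(1).expand(b, n, n, d)          # dependent word, broadcast over heads
        scores = self.w_score(torch.cat([h_i, h_j, t], dim=-1)).squeeze(-1)  # (b, n, n)
        scores = scores.masked_fill(adj == 0, float('-inf'))  # restrict attention to tree edges
        attn = torch.softmax(scores, dim=-1)
        attn = torch.nan_to_num(attn)                    # rows with no kept edges become all-zero
        out = torch.matmul(attn, self.w_node(h))         # attention-weighted neighbor aggregation
        return F.relu(out)
```

In this sketch, masking the attention scores with the pruned adjacency matrix is what lets the weights emphasize entity-relevant dependencies while down-weighting unrelated word-to-word links in long sentences.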
