Abstract

Most biomedical information extraction efforts focus on binary relations, yet there is a strong need to extract n-ary drug–gene–mutation relations that span multiple sentences. In recent years, end-to-end biomedical relation extraction with sequence-based or dependency-based methods has gained increasing attention. However, handling global dependencies and structural information remains challenging for both sequence-based and dependency-based models. Jointly exploiting sequence and graph information may improve biomedical cross-sentence relation extraction. In this paper, we present a hybrid model for cross-sentence biomedical relation extraction that aims to address these problems. Our model relies on the self-attention mechanism, which directly captures the global dependencies within a sentence. Furthermore, to preserve the structural information between words linked by syntactic dependency relations, we employ graph convolutional networks that encode this dependency structure to guide the multihead attention in learning the dependency relations. Through extensive experiments on benchmark datasets, we demonstrate the effectiveness of our method.
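To make the combination described above concrete, the following is a minimal sketch (not the authors' implementation) of the two building blocks: a graph convolutional layer that propagates token states along a dependency graph, followed by scaled dot-product self-attention over the resulting representations. All shapes, the toy adjacency matrix, and the single-head simplification are illustrative assumptions.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN layer: ReLU(A_hat @ H @ W), with symmetric normalization.

    A: (n, n) dependency-graph adjacency; H: (n, d) token states; W: (d, d') weights.
    """
    A = A + np.eye(A.shape[0])                 # add self-loops
    D = np.diag(1.0 / np.sqrt(A.sum(axis=1)))  # D^{-1/2}
    A_hat = D @ A @ D                          # normalized adjacency
    return np.maximum(A_hat @ H @ W, 0.0)      # ReLU activation

def self_attention(H):
    """Single-head scaled dot-product self-attention over token states H."""
    d = H.shape[1]
    scores = H @ H.T / np.sqrt(d)              # pairwise token affinities
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)          # softmax rows
    return w @ H

# Toy example: 4 tokens in a chain-shaped dependency graph (hypothetical data).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8))
out = self_attention(gcn_layer(A, H, W))
print(out.shape)  # (4, 8)
```

In the full model the GCN output would guide multihead attention rather than a single head, and both components would be trained end to end; the sketch only shows how dependency structure and global attention can be composed.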
