Abstract

Entity-relationship extraction models play a significant role in relation extraction, yet existing models cannot effectively identify entity-relationship triples involved in overlapping relations and suffer from long-distance dependencies between entities. This paper proposes an inter-span learning model for document-level relation extraction. First, the model converts the input into word vectors with a BERT pre-trained model. Second, it divides the word vectors into span sequences by random initial spans and uses convolutional neural networks to extract entity information within each span sequence; dividing the word vectors this way can place entity pairs with potentially overlapping relations into the same span sequence, partially mitigating the overlapping-relation problem. Third, the model uses inter-span learning to obtain entity information across different span sequences, fuses entity-type features, and applies softmax regression for entity recognition; because inter-span learning fuses information from different span sequences, it addresses the long-distance dependence between entities. Finally, it fuses text information with relation-type features and uses a linear layer to classify relations. Experiments demonstrate that the model improves the F1-score on the DocRED dataset by 2.74% compared to the baseline model.
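The span-based pipeline the abstract outlines can be illustrated with a minimal sketch. This is not the paper's implementation: the dimensions are toy values, the "inter-span" step is reduced to simple mean mixing, and names such as `split_into_spans` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_into_spans(word_vecs, span_len):
    """Divide a sequence of word vectors into consecutive span sequences."""
    return [word_vecs[i:i + span_len] for i in range(0, len(word_vecs), span_len)]

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-in for BERT output: 12 tokens, 8-dimensional word vectors.
word_vecs = rng.normal(size=(12, 8))
spans = split_into_spans(word_vecs, span_len=4)     # 3 spans of 4 tokens each

# Inter-span step (illustrative): pool each span, then mix the pooled
# representations so information can cross span boundaries.
pooled = np.stack([s.mean(axis=0) for s in spans])  # shape (3, 8)
mixed = pooled + pooled.mean(axis=0)                # fuse across spans

# Entity-typing head: project to 5 entity types and apply softmax.
W = rng.normal(size=(8, 5))
probs = softmax(mixed @ W)
print(probs.shape)                                  # (3, 5)
```

Each row of `probs` is a distribution over entity types for one span; a relation classifier would then operate on pairs of span representations.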
