Abstract

Joint extraction of entities and relations aims to detect entities and recognize the semantic relations between them simultaneously. However, some existing joint models predict relations over individual words rather than over whole entities. Such models cannot make full use of entity information when predicting relations, which degrades relation extraction. We propose an end-to-end model with a double-pointer module that jointly extracts whole entities and relations. The double-pointer module is combined with multiple decoders to predict the start and end positions of each entity in the input sentence. In addition, to learn the relevance between long-distance entities effectively, a multi-layer convolution and self-attention mechanism is used as the encoder instead of a Bi-RNN. We conduct experiments on two public datasets, and our model significantly outperforms the baseline methods.
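The abstract does not include implementation details, but the double-pointer idea, predicting a start and an end position for each entity over the encoder outputs, can be illustrated with a minimal sketch. The following PyTorch module, class name `DoublePointerHead`, the hidden size, and the sigmoid formulation are all illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a double-pointer span predictor (illustrative, not the paper's code).
import torch
import torch.nn as nn


class DoublePointerHead(nn.Module):
    """Scores each token as a possible start or end of an entity span."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.start_classifier = nn.Linear(hidden_size, 1)
        self.end_classifier = nn.Linear(hidden_size, 1)

    def forward(self, encoder_states: torch.Tensor):
        # encoder_states: (batch, seq_len, hidden_size) from the encoder
        start_logits = self.start_classifier(encoder_states).squeeze(-1)
        end_logits = self.end_classifier(encoder_states).squeeze(-1)
        # Per-token probabilities of beginning / ending an entity span.
        return torch.sigmoid(start_logits), torch.sigmoid(end_logits)


if __name__ == "__main__":
    states = torch.randn(2, 10, 256)          # toy batch of encoder outputs
    head = DoublePointerHead(hidden_size=256)
    start_prob, end_prob = head(states)
    print(start_prob.shape, end_prob.shape)   # torch.Size([2, 10]) each
```

In this reading, a whole entity is recovered by pairing a predicted start position with the nearest following predicted end position, which lets relations be assigned to complete entity spans rather than to single words.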
