Abstract

Named entity recognition and relation extraction are two important tasks in information extraction. Many recent works model the two tasks jointly and achieve great success. However, these methods still suffer from the problems of insufficient relation semantics, head-entity dependency, and nested entity detection. To address these challenges, we propose a relation-aware span-level transformer network (RSTN), which contains a span-level encoder for entity recognition and a non-autoregressive decoder for relation extraction. Specifically, we generate explicit representations for possible spans to extract overlapping entities in our span-level encoder. In addition, we encode relation semantics in our non-autoregressive decoder, and exploit a copy mechanism to extract head entities and tail entities simultaneously by modifying the causal attention mask. Through a span-level multi-head attention mechanism, we enhance the interaction between entity recognition and relation extraction in our model. We evaluate our model on three public datasets: ACE05, ADE and SciERC. Experimental results show that the proposed model outperforms previous strong baseline methods.
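The abstract does not give the exact span construction, but the core step behind any span-level encoder, including the one described here, is enumerating all candidate spans up to a maximum width so that nested and overlapping entities can be scored independently. A minimal sketch of that enumeration (function name and `max_width` parameter are illustrative assumptions, not from the paper):

```python
def enumerate_spans(tokens, max_width=3):
    """Enumerate all contiguous (start, end) spans up to max_width tokens.

    A span-level encoder builds a representation for each such span,
    so nested entities (e.g. "renal failure" inside "acute renal
    failure") can both be classified as entities.
    """
    spans = []
    for start in range(len(tokens)):
        for width in range(1, max_width + 1):
            end = start + width  # exclusive end index
            if end > len(tokens):
                break
            spans.append((start, end))
    return spans

# Example: three tokens, spans of width 1 and 2
print(enumerate_spans(["acute", "renal", "failure"], max_width=2))
# → [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
```

In a full model, each `(start, end)` pair would be mapped to a vector (e.g. from boundary token embeddings plus a width embedding) and fed to an entity classifier; the enumeration above only shows why overlapping entities pose no conflict at the span level.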
