Abstract

Relation classification aims to predict a relation between two entities in a sentence. Existing methods regard all relations as candidate relations for the two entities in a sentence. These methods neglect the restrictions that entity types place on candidate relations, which allows inappropriate relations to remain candidates. In this paper, we propose a novel paradigm, RElation Classification with ENtity Type restriction (RECENT), which exploits entity types to restrict candidate relations. Specifically, the mutual restrictions between relations and entity types are formalized and introduced into relation classification. In addition, the proposed paradigm, RECENT, is model-agnostic. Based on two representative models, GCN and SpanBERT, RECENT_GCN and RECENT_SpanBERT are trained under RECENT, respectively. Experimental results on a standard dataset indicate that RECENT improves the performance of GCN and SpanBERT by 6.9 and 4.4 F1 points, respectively. In particular, RECENT_SpanBERT achieves a new state-of-the-art result on TACRED.
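A minimal sketch of one way such a type restriction could be imposed at prediction time is shown below. It assumes a hypothetical TYPE_TO_RELATIONS map from (subject type, object type) pairs to admissible TACRED relations and a base classifier that emits one logit per relation; the paper's actual grouping and training strategy is not detailed on this page and may differ.

```python
import torch

# Hypothetical map from (subject type, object type) to admissible relations.
# In TACRED, for instance, per:schools_attended only makes sense between a
# PERSON subject and an ORGANIZATION object.
RELATIONS = ["no_relation", "per:schools_attended", "org:founded_by", "per:age"]
TYPE_TO_RELATIONS = {
    ("PERSON", "ORGANIZATION"): {"no_relation", "per:schools_attended"},
    ("ORGANIZATION", "PERSON"): {"no_relation", "org:founded_by"},
    ("PERSON", "NUMBER"): {"no_relation", "per:age"},
}

def restrict_logits(logits: torch.Tensor, subj_type: str, obj_type: str) -> torch.Tensor:
    """Mask out relations that are incompatible with the entity-type pair."""
    allowed = TYPE_TO_RELATIONS.get((subj_type, obj_type), {"no_relation"})
    mask = torch.tensor([0.0 if r in allowed else float("-inf") for r in RELATIONS])
    return logits + mask

# Example: raw logits from any base model (GCN, SpanBERT, ...), then restriction.
scores = torch.tensor([0.1, 0.3, 1.2, 0.5])
restricted = restrict_logits(scores, "PERSON", "ORGANIZATION")
print(RELATIONS[int(restricted.argmax())])  # only type-compatible relations survive
```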

Highlights

  • Relation classification, a supervised version of relation extraction, aims to predict a relation between two entities in a sentence

  • Pretrained language models (Devlin et al., 2019; Baldini Soares et al., 2019; Joshi et al., 2020) achieve good performance in relation classification since they are pretrained on massive corpora

  • The proposed paradigm RECENT is evaluated on TACRED (Zhang et al., 2017)


Summary

Introduction

Relation classification, a supervised version of relation extraction, aims to predict a relation between two entities in a sentence. The majority of methods use various neural network architectures to learn a fixed-size representation of a sentence and its entities from various language features, such as part of speech (POS), entity types, and dependency trees. Dependency trees parsed from sentences are exploited by GCNs (Kipf and Welling, 2017) to model sentences (Zhang et al., 2018; Guo et al., 2019). As a sequence of words, a sentence can also be modeled by an LSTM (Hochreiter and Schmidhuber, 1997), with entity positions incorporated through an attention mechanism (Zhang et al., 2017). Pretrained language models (Devlin et al., 2019; Baldini Soares et al., 2019; Joshi et al., 2020) achieve good performance in relation classification since they are pretrained on massive corpora.
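As a rough illustration of the GCN-based sentence modeling mentioned above, the sketch below propagates word representations along a dependency adjacency matrix and pools them into a fixed-size vector. It is a generic single-layer GCN under assumed shapes and an undirected treatment of the parse tree, not the exact architecture of Zhang et al. (2018) or Guo et al. (2019).

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """One graph-convolution layer over a dependency adjacency matrix."""
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_words, dim) word representations; adj: (num_words, num_words)
        adj = adj + torch.eye(adj.size(0))             # add self-loops
        deg = adj.sum(dim=1, keepdim=True)             # node degrees for normalization
        return torch.relu(self.linear(adj @ x) / deg)  # propagate along dependency edges

# Toy example: 4 words with head-dependent edges from a parsed dependency tree.
x = torch.randn(4, 8)
adj = torch.zeros(4, 4)
for head, dep in [(1, 0), (1, 2), (2, 3)]:
    adj[head, dep] = adj[dep, head] = 1.0              # treat the tree as undirected
sentence_repr = GCNLayer(8)(x, adj).max(dim=0).values  # pool to a fixed-size vector
```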


