Abstract

In natural language, words form phrases and phrases form sentences. However, existing transformer-based models for sentence-level tasks abstract sentence-level semantics directly from word-level semantics, bypassing phrase-level semantics and thus potentially failing to capture precise meaning. To resolve this problem, we propose a novel multi-granularity semantic representation (MGSR) model for relation extraction. The model bridges the gap between low-level and high-level semantic abstraction by successively learning word-level, phrase-level, and sentence-level representations. Given an entity pair, we segment a sentence into entity chunks and context chunks, so that the sentence is represented as a non-empty set of segments. The entity chunks are noun phrases, and the context chunks contain the key phrases that express semantic relations. The MGSR model then applies three distinct self-attention mechanisms (inter-word, inner-chunk, and inter-chunk) to learn the multi-granularity semantic representations. Experiments on two standard datasets demonstrate that our model outperforms previous models.
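The entity-pair-based segmentation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `segment_sentence`, the representation of entity mentions as half-open token-index spans, and the example sentence are all assumptions made for the sketch.

```python
# Hedged sketch: split a tokenized sentence into entity chunks and
# context chunks around a given entity pair, yielding the non-empty
# segmentation set the abstract describes. Spans are (start, end)
# half-open token-index ranges; all names here are illustrative.

def segment_sentence(tokens, head_span, tail_span):
    """Return an ordered list of (chunk_type, chunk_tokens) pairs,
    where chunk_type is 'entity' or 'context'; empty spans are dropped."""
    (h0, h1), (t0, t1) = sorted([head_span, tail_span])
    boundaries = [
        (0, h0, "context"),          # text before the first entity
        (h0, h1, "entity"),          # first entity chunk (noun phrase)
        (h1, t0, "context"),         # text between the entities
        (t0, t1, "entity"),          # second entity chunk (noun phrase)
        (t1, len(tokens), "context"),  # text after the second entity
    ]
    return [(kind, tokens[s:e]) for s, e, kind in boundaries if s < e]

tokens = "The company Apple was founded by Steve Jobs in 1976".split()
chunks = segment_sentence(tokens, head_span=(2, 3), tail_span=(6, 8))
# The entity chunks are ['Apple'] and ['Steve', 'Jobs']; the middle
# context chunk ['was', 'founded', 'by'] carries the relational phrase.
```

In this toy example, the context chunk between the two entities contains the key phrase ("was founded by") that expresses the semantic relation, which is exactly the kind of phrase-level unit the inner-chunk and inter-chunk attention mechanisms would operate over.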

