Abstract

Entity and relation extraction, a crucial component of many natural language processing tasks, transforms unstructured text into structured data and provides essential support for constructing knowledge graphs (KGs). However, current entity relation extraction models typically focus on extracting richer semantic features or optimizing the relation extraction step, overlooking the importance of positional information and subject characteristics. To address this problem, we introduce SPECE, a subject position-based complex exponential embedding model for entity relation extraction. Its encoder module combines a randomly initialized dilated convolutional network with a BERT encoder and determines the starting position of the predicted subject from semantic cues. Positional encoding features are then fused with textual features through the complex exponential embedding method. Experiments on the NYT and WebNLG datasets show that SPECE achieves significant F1 improvements over baseline models on both datasets, further validating its effectiveness and superiority.
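The abstract does not include implementation details, so the following is only a minimal, hypothetical sketch of how a complex exponential embedding keyed to a predicted subject start position might fuse positional and textual features. The function name, relative-offset scheme, frequency choice, and NumPy implementation are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def complex_exponential_embedding(token_feats, subject_start, base=10000.0):
    """Illustrative sketch (not the paper's method): modulate token features
    with complex exponentials whose phase depends on each token's offset from
    the predicted subject start position.

    token_feats: array of shape (seq_len, dim), dim must be even.
    subject_start: assumed index of the predicted subject's first token.
    """
    seq_len, dim = token_feats.shape
    assert dim % 2 == 0, "feature dimension must be even to form complex pairs"

    # Relative offsets from the (hypothetical) predicted subject start position.
    positions = np.arange(seq_len) - subject_start              # (seq_len,)

    # One angular frequency per complex pair, as in sinusoidal-style schemes.
    freqs = 1.0 / (base ** (np.arange(dim // 2) / (dim // 2)))  # (dim/2,)
    phases = np.outer(positions, freqs)                          # (seq_len, dim/2)

    # Treat adjacent feature pairs as real/imaginary parts of complex numbers.
    feats_c = token_feats[:, 0::2] + 1j * token_feats[:, 1::2]

    # Multiply by e^{i * phase}: rotates each pair by a position-dependent
    # angle, injecting positional information into the textual features.
    fused_c = feats_c * np.exp(1j * phases)

    # Interleave real and imaginary parts back into a real-valued tensor.
    fused = np.empty_like(token_feats)
    fused[:, 0::2] = fused_c.real
    fused[:, 1::2] = fused_c.imag
    return fused

# Usage: BERT-sized features for a 12-token sentence, subject predicted at index 3.
if __name__ == "__main__":
    feats = np.random.randn(12, 768).astype(np.float32)
    print(complex_exponential_embedding(feats, subject_start=3).shape)  # (12, 768)
```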
