Abstract

Semantic relation extraction is crucial for automatically constructing a knowledge graph (KG) and supports a variety of downstream natural language processing (NLP) tasks such as question answering (QA), semantic search and textual entailment. The task involves identifying entity pairs in raw text and extracting the semantic relations between them. Existing methods consider only lexical-level features and often ignore syntactic features, which limits relation extraction performance. Based on an analysis of the role of syntactic dependencies and of the contributions of individual words in a sentence to relation extraction, this paper proposes an end-to-end method that uses a bidirectional tree-structured long short-term memory (LSTM) network to extract structural features from the dependency tree of a sentence. To further improve performance, a bidirectional sequential LSTM with attention captures word-based features, including the positional information of entity pairs and the contribution of each word. The structural and word-based features are then concatenated for relation classification. The proposed method is evaluated on the SemEval 2010 Task 8 and CoNLL04 datasets, where it achieves state-of-the-art results.
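The feature-fusion step the abstract describes — attention-weighted pooling of word-level representations, then concatenation with a structural feature vector — can be sketched as follows. This is a minimal illustration with toy vectors standing in for the tree-LSTM and BiLSTM outputs; the function names and dimensions are illustrative, not the paper's implementation.

```python
import math

def softmax(scores):
    # numerically stable softmax over a list of attention scores
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(word_vecs, query):
    # score each word vector against a query vector (dot product),
    # then take the attention-weighted sum as the word-based feature
    scores = [sum(w * q for w, q in zip(v, query)) for v in word_vecs]
    weights = softmax(scores)
    dim = len(word_vecs[0])
    return [sum(weights[i] * word_vecs[i][d] for i in range(len(word_vecs)))
            for d in range(dim)]

def fuse(structural_feat, word_feat):
    # concatenate the structural (tree-LSTM) and word-based
    # (BiLSTM + attention) features into one vector for classification
    return structural_feat + word_feat

# Toy example: two 2-d word vectors pooled, then fused with a
# 3-d structural feature from the (hypothetical) tree-LSTM.
word_based = attention_pool([[1.0, 0.0], [0.0, 1.0]], query=[1.0, 0.0])
fused = fuse([0.5, -0.2, 0.1], word_based)
```

In a real model the query vector would itself be learned, and the fused vector would feed a softmax classifier over relation labels; the sketch only shows how the two feature streams combine.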
