Abstract

Research on Open Information Extraction (Open IE) has made great progress in recent years. Open IE is the task of detecting a set of structured, machine-readable statements, usually represented as triples or n-ary relation statements. It is a core area of Natural Language Processing (NLP): its extractions decompose grammatically complex sentences in a corpus into the relationships they express, which can be leveraged for various downstream tasks. Despite considerable prior work, existing strategies still face several issues. Most previous Open IE systems rely on a set of manually designed patterns to detect and extract relational tuples from a sentence; these patterns are either hand-crafted or automatically learned from annotated training examples. This approach has two drawbacks. First, it requires substantial manual effort. Second, such systems depend on many NLP tools, so errors accumulate through the pipeline and degrade the results. In this paper, we propose an Open IE approach based on the Transformer architecture. To verify our approach, we conduct a study on a large public benchmark dataset, and the experimental results show that our model achieves better performance than many existing baselines.
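For illustration only (the example sentence and triples below are not drawn from the paper), an Open IE system maps a sentence to relational tuples of the kind described above, typically (subject, relation, object) triples:

```python
# A minimal sketch of the Open IE task's input/output contract.
# The sentence and the triples are hypothetical examples of the
# kind of extractions an Open IE system targets; no actual model
# is invoked here.
sentence = "Barack Obama, who was born in Hawaii, served as the 44th U.S. president."

# Expected extractions, each a (subject, relation, object) triple:
triples = [
    ("Barack Obama", "was born in", "Hawaii"),
    ("Barack Obama", "served as", "the 44th U.S. president"),
]

for subj, rel, obj in triples:
    print(f"({subj}; {rel}; {obj})")
```

Note how a single grammatically complex sentence yields multiple independent triples, which is what makes such extractions useful for downstream tasks.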
