Abstract

In an age overflowing with information, converting unstructured data into structured data is a vital task. Most current relation extraction modules focus on extracting local, mention-level relations, usually from short passages of text. In many cases, however, the most important relations are those described at length and in detail. In this research, we propose GREG: a Global-level Relation Extractor model using knowledge graph embeddings for document-level inputs. The model uses vector representations of mention-level 'local' relations to construct knowledge graphs that represent the input document. The knowledge graph is then used to predict global-level relations from documents or other large bodies of text. The proposed model is divided into two modules that are synchronized during training, so that each module handles local relations or global relations separately. This design avoids the loss of information that occurs when too much content is compressed into small representations during global-level relation extraction. Our evaluation shows that the proposed model consistently performs well at predicting both global-level and local-level relations.
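The abstract describes the two modules as "synchronized during their training" by alternating updates between them. A minimal sketch of such an alternating schedule is below; the function name `train_alternating` and the per-epoch alternation are illustrative assumptions, not the paper's actual training code.

```python
# Hedged sketch: alternating ("synchronized") training of two modules.
# The names and the one-step-each schedule are assumptions for illustration,
# not taken from the GREG paper.

def train_alternating(local_step, global_step, epochs=4):
    """Alternate one optimization step of the local relation-extraction
    module with one step of the global (knowledge-graph) module."""
    schedule = []
    for epoch in range(epochs):
        if epoch % 2 == 0:
            local_step()   # update the mention-level module
            schedule.append("local")
        else:
            global_step()  # update the document-level module
            schedule.append("global")
    return schedule

schedule = train_alternating(lambda: None, lambda: None)
print(schedule)  # ['local', 'global', 'local', 'global']
```

In practice each step would be a gradient update on the respective module's loss; the alternation lets each module adapt to the other's latest representations.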

Highlights

  • Several machine learning applications, such as Natural Language Processing, increasingly rely on deep learning due to its high accuracy, performance, and adaptability [1], which has caused the importance and volume of data to grow exponentially

  • Our first experiment attempts to confirm whether the synchronization process does or does not hinder the local relation extraction module’s performance

  • We have presented a Global level Relation Extraction model with knowledge graph embedding for document-level inputs


Summary

Introduction

Several machine learning applications, such as Natural Language Processing, increasingly rely on deep learning due to its high accuracy, performance, and adaptability [1]. One of the most popular methods of converting unstructured data into a structured format is relation extraction. While previous approaches applied to smaller sets of text data have shown promising results, performance on larger sets of data tends to suffer. This shortcoming is especially apparent when identifying global, document-level relations. The proposed model constructs a knowledge graph from the input document, and this knowledge graph is then used to extract the document's global relations through graph embedding. To do this, both modules go through a synchronization process, which we achieve by training the two modules in an alternating fashion. This allows the proposed model to extract and classify local relations while taking their possible global relations into account.
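The pipeline above builds a knowledge graph from mention-level triples and then reasons over it to find document-level relations. A minimal sketch of that idea follows; the helper names (`build_kg`, `global_relation_candidates`) and the multi-hop path search are illustrative assumptions, since the paper's actual method uses learned graph embeddings rather than explicit path enumeration.

```python
# Hedged sketch: build a knowledge graph from local (head, relation, tail)
# triples, then surface multi-hop paths between entities as candidates for
# global, document-level relations. Names are illustrative, not the paper's.
from collections import defaultdict

def build_kg(triples):
    """Build an adjacency-list knowledge graph from (head, relation, tail) triples."""
    kg = defaultdict(list)
    for head, rel, tail in triples:
        kg[head].append((rel, tail))
    return kg

def global_relation_candidates(kg, source, target, max_hops=2):
    """Collect relation paths of up to max_hops linking source to target.

    A multi-hop path suggests a global relation that no single local
    mention states directly."""
    paths = []
    frontier = [(source, [])]
    for _ in range(max_hops):
        next_frontier = []
        for node, path in frontier:
            for rel, tail in kg.get(node, []):
                new_path = path + [rel]
                if tail == target:
                    paths.append(new_path)
                next_frontier.append((tail, new_path))
        frontier = next_frontier
    return paths

# Toy mention-level triples extracted from a document
local_triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]
kg = build_kg(local_triples)
print(global_relation_candidates(kg, "Marie Curie", "Poland"))
# [['born_in', 'capital_of']]
```

The two-hop path hints at a document-level relation (e.g., country of origin) that is never stated in a single sentence, which is the kind of signal an embedding over the graph can learn to classify.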

Relation Extraction
Knowledge Graph Embedding
Context-Aware Relation Extraction
Knowledge Graph Constructor
Synchronization Process
Experiment and Results
Effects of Synchronization
Model Performance Comparison
Conclusions
