Abstract

Relation extraction is a fundamental natural language processing task used to process a given corpus and infer hidden connections and relations between real-world objects. Most contemporary works employ state-of-the-art language models to determine the type of relationship between a pair of entities in a given sentence, but these approaches are computationally expensive and cannot identify or match the entities in the sentence as a single task; instead, they break the problem down into subtasks or rely on a multi-module framework that requires multiple propagations through the network. This paper presents a novel methodology that extracts relations between multiple pairs of entities in a sentence and performs relation classification. The proposed methodology employs a graph neural network and, in contrast to existing research, uses a hybrid attention mechanism that dynamically optimizes the graph edges to capture the relevant details in the network graph, enabling faster computation than the more typical transformer-based networks that overwhelm CPU-based systems. The paper also studies the effect of the number of hop transformations on the graph and of other hyper-parameters controlling the input sentence representation. The proposed model architecture achieves a macro-averaged F1 score of 86.2 on the SemEval 2010 relation extraction dataset, with further room for improvement.
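
The exact layer definitions appear in the body of the paper; as a rough illustration of the general idea only, the sketch below (plain NumPy, with all names, dimensions, and activation choices being hypothetical assumptions rather than the authors' specification) shows how per-edge attention scores can be computed over a sentence graph and used to weight message passing, with several hops stacked to widen the receptive field.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_gnn_layer(h, adj, W, a):
    """One hop of attention-weighted message passing (illustrative only).
    h:   (N, F) node features, e.g. token embeddings on a sentence graph
    adj: (N, N) binary adjacency matrix (1 where an edge exists)
    W:   (F, D) projection matrix
    a:   (2D,)  attention vector scoring each (source, target) pair
    """
    z = h @ W                                   # project node features
    d = z.shape[1]
    src = z @ a[:d]                             # per-node source score
    dst = z @ a[d:]                             # per-node target score
    scores = src[:, None] + dst[None, :]        # additive pairwise scores
    scores = np.where(adj > 0, scores, -1e9)    # mask scores on non-edges
    alpha = softmax(scores, axis=1)             # normalized edge weights
    return np.tanh(alpha @ z), alpha            # aggregated features, edge weights

def run_hops(h, adj, layers):
    """Stack K hops to study the effect of the number of hop transformations."""
    for W, a in layers:
        h, _ = attention_gnn_layer(h, adj, W, a)
    return h
```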
