Abstract

Event extraction is an important research direction in natural language processing (NLP) applications, including information retrieval (IR). Traditional event extraction follows one of two approaches: the pipeline method or the joint extraction method. The pipeline method first recognizes trigger words to determine the event and then extracts its arguments, which makes it prone to cascading errors. The joint extraction method applies deep learning to perform trigger word and argument role classification jointly. Most joint extraction studies adopt a CNN or RNN network structure. However, event extraction requires a deeper understanding of complex contexts, and existing studies do not make full use of syntactic relations. This paper proposes a novel event extraction model built upon a Tree-LSTM network and a Bi-GRU network that incorporates syntactic information. The method uses Tree-LSTM and Bi-GRU simultaneously to obtain a representation of the candidate event sentence and identify the event type, which yields better performance than models that use a chain-structured LSTM, a CNN, or Tree-LSTM alone. Finally, the hidden state of each node in the Tree-LSTM is used to predict a label for candidate arguments and to identify and classify all arguments of an event. Experimental results show that the proposed event extraction model achieves competitive results compared to previous work.
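
Below is a minimal sketch (in PyTorch, an assumption, since the paper does not specify a framework) of how the two encoders could be combined for event type identification: the Tree-LSTM supplies a syntax-aware root representation, a Bi-GRU supplies context preceding and following each word, and their concatenation feeds a classifier over event types. The class name, layer sizes, and fusion by concatenation are illustrative, not the authors' exact design.

```python
import torch
import torch.nn as nn

class EventTypeClassifier(nn.Module):
    def __init__(self, embed_dim: int, hidden_dim: int, num_event_types: int):
        super().__init__()
        # Bi-GRU over word embeddings captures context on both sides of each word.
        self.bigru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Fuse the Bi-GRU sentence context with the Tree-LSTM root state (computed elsewhere).
        self.classifier = nn.Linear(3 * hidden_dim, num_event_types)

    def forward(self, word_embeds: torch.Tensor, tree_root_h: torch.Tensor) -> torch.Tensor:
        # word_embeds: (batch, seq_len, embed_dim) embeddings of the candidate event sentence
        # tree_root_h: (batch, hidden_dim) root hidden state from the dependency Tree-LSTM
        _, h_n = self.bigru(word_embeds)                 # h_n: (2, batch, hidden_dim)
        context = torch.cat([h_n[0], h_n[1]], dim=-1)    # forward and backward final states
        fused = torch.cat([context, tree_root_h], dim=-1)
        return self.classifier(fused)                    # logits over event types

# Illustrative usage with random tensors (34 is the number of ACE event subtypes,
# used here only as an example):
# model = EventTypeClassifier(embed_dim=100, hidden_dim=128, num_event_types=34)
# logits = model(torch.randn(4, 20, 100), torch.randn(4, 128))  # shape (4, 34)
```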

Highlights

  • The concept of ‘‘event’’ is increasingly adopted in knowledge-related fields such as artificial intelligence and information retrieval

  • To address these research questions, we propose a model that directly identifies the event type; in this model, trigger word labeling is a by-product of event type identification

  • We introduce an event extraction model based on Tree-LSTM and Bi-GRU to improve the performance of event extraction

Summary

INTRODUCTION

The concept of ‘‘event’’ is increasingly adopted in knowledge-related fields such as artificial intelligence and information retrieval. To obtain more syntactic and contextual information, a dependency Tree-LSTM network is trained to produce sentence embeddings, highlighting the role of dependency structure in event extraction tasks. The model obtains contextual and syntactic information from the representation of the event sentence through two neural network models. In this task, we need to distinguish sentences that contain events from those that do not and identify the event type. After the event type is determined, the trigger word and argument role classification tasks are completed according to the hidden state of the root node of each subtree in the Tree-LSTM. The traditional chain-structured LSTM used in event detection cannot simultaneously model the representation of a given word with the context information both preceding and following it, which motivates the bidirectional component. The input of Part II of the model is composed of the embeddings of all words in the candidate event sentence and their context.
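
The dependency Tree-LSTM composes a node's representation from its word embedding and the states of its dependents. Below is a minimal sketch of a Child-Sum Tree-LSTM cell (Tai et al., 2015), a common choice for dependency trees; the paper does not pin down the exact variant, so the gating and tensor shapes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTMCell(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int):
        super().__init__()
        # Input, output, and update gates from the node's word and the summed child states.
        self.W_iou = nn.Linear(input_dim, 3 * hidden_dim)
        self.U_iou = nn.Linear(hidden_dim, 3 * hidden_dim, bias=False)
        # A separate forget gate is computed per child.
        self.W_f = nn.Linear(input_dim, hidden_dim)
        self.U_f = nn.Linear(hidden_dim, hidden_dim, bias=False)

    def forward(self, x, child_h, child_c):
        # x:       (input_dim,)                 embedding of the current node's word
        # child_h: (num_children, hidden_dim)   hidden states of its dependents
        # child_c: (num_children, hidden_dim)   cell states of its dependents
        # Leaves can pass empty (0, hidden_dim) tensors.
        h_tilde = child_h.sum(dim=0)                      # summed child hidden state
        i, o, u = (self.W_iou(x) + self.U_iou(h_tilde)).chunk(3, dim=-1)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        # One forget gate per child, so each subtree can be kept or discarded separately.
        f = torch.sigmoid(self.W_f(x).unsqueeze(0) + self.U_f(child_h))
        c = i * u + (f * child_c).sum(dim=0)
        h = o * torch.tanh(c)
        return h, c
```

Applying this cell bottom-up over the dependency tree yields a hidden state for every node; the root state serves as the sentence representation, and the per-node states support the argument labeling step described above.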

MODULE OF SENTENCE EMBEDDING BASED ON Tree-LSTM
MODULE OF SENTENCE CONTEXT EMBEDDING BASED ON Bi-GRU
EVENT EXTRACTION
Findings
CONCLUSION