Abstract

Event argument extraction (EAE) aims to identify the arguments of an event and to classify the roles those arguments play. Despite great efforts in prior work, several challenges remain: (1) data scarcity; (2) capturing long-range dependencies, specifically the connection between an event trigger and a distant event argument; and (3) integrating event trigger information into candidate argument representations. For (1), we explore using unlabeled data. For (2), we use a Transformer whose attention mechanism is guided by dependency parses. For (3), we propose a trigger-aware sequence encoder that learns several types of trigger-dependent sequence representations. We also support argument extraction either from text annotated with gold entities or from plain text. Experiments on the English ACE 2005 benchmark show that our approach achieves a new state of the art.

Highlights

  • Event argument extraction (EAE) aims to identify the entities that serve as arguments of an event and to classify the specific roles they play

  • Inspired by Strubell et al. (2018), we utilize dependency parses by modifying an attention head in each layer of a Transformer. Note that this Transformer is distinct from the BERT component: it aims to capture long-range dependencies on top of the trigger-aware representations learned by our sequence encoder

  • We use Adam (Kingma and Ba, 2014) as the optimizer with a batch size of 32 for both the main task (EAE) and the auxiliary task (trigger detection); we alternate between batches of the main and auxiliary tasks with probabilities of 0.9 and 0.1, respectively
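The parse-guided attention idea above can be sketched as an attention head whose softmax is masked so that each token may attend only to its syntactic head (and to itself). This is a minimal illustration of the general technique, not the paper's actual implementation; the function name and masking scheme are assumptions.

```python
import numpy as np

def parse_guided_attention(scores, heads):
    """One attention head restricted by a dependency parse.

    `scores` is a (T, T) matrix of raw attention logits and `heads[i]` is
    the parse head of token i (the root points to itself). All positions
    other than a token's syntactic head and the token itself are masked
    out before the softmax. Illustrative sketch only.
    """
    T = scores.shape[0]
    mask = np.full((T, T), -np.inf)
    for i, h in enumerate(heads):
        mask[i, h] = 0.0  # allow attending to the syntactic head
        mask[i, i] = 0.0  # and to the token itself
    masked = scores + mask
    # softmax over the allowed positions only
    e = np.exp(masked - masked.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

With uniform logits, the resulting attention spreads evenly over each token's allowed positions, so the head's receptive field follows the parse tree rather than the full sequence.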
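The alternation between main-task and auxiliary-task batches described above amounts to a simple stochastic schedule: at each training step, draw a main-task (EAE) batch with probability 0.9 and a trigger-detection batch with probability 0.1. A minimal sketch of the schedule only (function name and task labels are illustrative; actual batch contents are omitted):

```python
import random

def sample_task_schedule(n_steps, p_main=0.9, seed=0):
    """Return a per-step task schedule alternating stochastically between
    the main EAE task (probability p_main) and the auxiliary trigger
    detection task (probability 1 - p_main)."""
    rng = random.Random(seed)
    return ["EAE" if rng.random() < p_main else "trigger"
            for _ in range(n_steps)]
```

Over many steps, roughly 90% of batches come from the main task, so the auxiliary signal regularizes training without dominating it.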


Summary

Introduction

Event argument extraction (EAE) aims to identify the entities that serve as arguments of an event and to classify the specific roles they play. For example, given the event trigger “injured”, the spans “two soldiers” and “yesterday” play the roles Victim and Time, respectively. To address data scarcity, we use (unlabeled) in-domain data to adapt the BERT model parameters in a subsequent pretraining step, as in Gururangan et al. (2020). A crucial aspect of EAE is integrating event trigger information into the learned representations. This is important because arguments are dependent on triggers, i.e., the same argument span can play completely different roles with respect to different triggers. Unlike existing work that relies on regular sequence encoders, we design a novel trigger-aware encoder which simultaneously learns four different types of trigger-informed sequence representations. Our model achieves a new state of the art on the ACE 2005 events data (Grishman et al., 2005)
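Two common ways to make a sequence representation trigger-aware are to mark the trigger span with special tokens and to feed each token its signed distance to the trigger. The sketch below shows both; these are hypothetical illustrations of the general idea, not the paper's four specific trigger-informed representations.

```python
def trigger_aware_inputs(tokens, trig_start, trig_end):
    """Build two illustrative trigger-aware views of a token sequence:
    (a) the sequence with marker tokens <trg> ... </trg> around the
        trigger span, and
    (b) each original token's signed distance to the trigger span
        (0 inside the span, negative before it, positive after it).
    """
    marked = (tokens[:trig_start] + ["<trg>"] +
              tokens[trig_start:trig_end + 1] + ["</trg>"] +
              tokens[trig_end + 1:])
    distances = [0 if trig_start <= i <= trig_end
                 else (i - trig_start if i < trig_start else i - trig_end)
                 for i in range(len(tokens))]
    return marked, distances
```

For the sentence “two soldiers were injured yesterday” with trigger “injured”, the marker view surrounds “injured” with `<trg>`/`</trg>`, and the distance view makes explicit that “two soldiers” and “yesterday” sit at different offsets from the trigger, which is what allows the same span to be represented differently for different triggers.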

Task Setup
Modeling Argument Extraction
Training Regimes for Data Scarcity
Experiments
Results and Analyses
Conclusion
Related Work
