Abstract

Document-level event argument linking (EAL) aims to find, across an entire document, the arguments that fill an event's semantic roles, a task made challenging by long contexts and data sparsity. In this paper, we study a new formulation that addresses these challenges by explicitly framing document-level EAL as a machine reading comprehension (MRC) problem, in which argument extraction is viewed as a question answering procedure. To better convert each semantic role into a question, we propose a back-translation based query generation method that effectively produces well-formed questions without heavy human effort. Moreover, to better capture the non-local dependencies between triggers and arguments, we devise a dependency-guided question answering process that exploits the underlying structure of the document to boost learning. Extensive experiments on a benchmark demonstrate the effectiveness of our approach. In particular, it achieves a substantial improvement over previous methods, with a +5.7% F1 gain in the full argument linking setting. Our approach is also particularly data-efficient, demonstrating superior performance in the low-data scenario with limited training data.
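
To make the formulation concrete, the minimal Python sketch below illustrates the two ideas summarized above: a semantic role is turned into a natural-language question by round-trip (back-)translation, and that question is then answered extractively over the document. The question template, the choice of German as the pivot language, the role_to_question helper, and the off-the-shelf models (Helsinki-NLP MarianMT translators and a SQuAD2-tuned RoBERTa reader) are illustrative assumptions only; they are not the paper's actual templates, models, or dependency-guided answering process.

from transformers import pipeline

# Round-trip translation through a pivot language (German here -- an assumption;
# the abstract does not specify the pivot) to paraphrase a templated question.
en_to_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
de_to_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

def role_to_question(role: str, trigger: str) -> str:
    """Hypothetical helper: turn a semantic role into a well-formed question."""
    template = f"What is the {role} of the {trigger} event?"  # naive template, an assumption
    pivot = en_to_de(template)[0]["translation_text"]
    return de_to_en(pivot)[0]["translation_text"]

# Extractive QA reader standing in for the MRC-style argument extractor.
reader = pipeline("question-answering", model="deepset/roberta-base-squad2")

document = (
    "Militants attacked a convoy outside Kabul on Tuesday, "
    "and three soldiers were wounded in the assault."
)
question = role_to_question("attacker", "attack")
prediction = reader(question=question, context=document)
print(question)
print(prediction["answer"], prediction["score"])

In the paper's setting, the back-translated questions would be generated once per role and the reader trained on the argument linking benchmark rather than used zero-shot as in this sketch.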

