Abstract

Most previous supervised event extraction methods have relied on features derived from manual annotations, and thus cannot be applied to new event types without extra annotation effort. We take a fresh look at event extraction and model it as a generic grounding problem: mapping each event mention to a specific type in a target event ontology. We design a transferable architecture of structural and compositional neural networks to jointly represent and map event mentions and types into a shared semantic space. Based on this new framework, we can select, for each event mention, the event type which is semantically closest in this space as its type. By leveraging manual annotations available for a small set of existing event types, our framework can be applied to new unseen event types without additional manual annotations. When tested on 23 unseen event types, our zero-shot framework, without manual annotations, achieved performance comparable to a supervised model trained from 3,000 sentences annotated with 500 event mentions.
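To make the "closest type in the shared space" idea concrete, here is a minimal Python sketch of similarity-based type selection: given an embedding for an event mention and embeddings for the candidate event types, it returns the type with the highest cosine similarity. The 300-dimensional random vectors and the type names are placeholders, not the representations learned by the paper's networks.

```python
from typing import Dict

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def closest_event_type(mention_vec: np.ndarray,
                       type_vecs: Dict[str, np.ndarray]) -> str:
    """Return the event type whose embedding lies closest to the mention
    embedding in the shared semantic space."""
    return max(type_vecs, key=lambda t: cosine_similarity(mention_vec, type_vecs[t]))


# Toy example: random placeholder embeddings stand in for the vectors the
# paper's networks would produce; the type names are illustrative only.
rng = np.random.default_rng(0)
mention = rng.normal(size=300)
ontology = {name: rng.normal(size=300)
            for name in ["Attack", "Transport-Person", "Elect"]}
print(closest_event_type(mention, ontology))
```

Because the type-side embeddings can include event types for which no annotated mentions exist, the same selection step carries over to unseen types, which is what makes the framework zero-shot.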

Highlights

  • The goal of event extraction is to identify event triggers and their arguments in unstructured text data, and to assign an event type to each trigger and a semantic role to each argument

  • We propose to enrich the semantic representations of each event mention and event type with rich structures, and to determine the type based on the semantic similarity between an event mention and each event type defined in a target ontology (a composition sketch follows this list)

  • Without any annotated mentions of the 23 unseen test event types in its training set, our transfer learning approach achieved performance comparable to that of the LSTM baseline, which was trained on 3,000 sentences with 500 annotated event mentions
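The second highlight mentions enriching each event mention with rich structure before the similarity comparison. As a minimal sketch of that idea (not the paper's actual structural and compositional networks), the composition below mean-pools a trigger embedding with its argument embeddings and applies a learned linear projection into the shared space; dimensions and parameters are placeholders.

```python
from typing import List

import numpy as np


def compose_mention(trigger_vec: np.ndarray,
                    argument_vecs: List[np.ndarray],
                    projection: np.ndarray) -> np.ndarray:
    """Compose a trigger and its arguments into one structure embedding.

    A mean-pool plus a tanh-activated linear projection stands in for the
    structural/compositional networks described in the paper.
    """
    pooled = np.mean([trigger_vec] + argument_vecs, axis=0)
    return np.tanh(projection @ pooled)


# Illustrative dimensions and randomly initialised parameters.
dim = 300
rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(dim, dim))
mention_vec = compose_mention(rng.normal(size=dim),
                              [rng.normal(size=dim), rng.normal(size=dim)],
                              W)
```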



Introduction

The goal of event extraction is to identify event triggers and their arguments in unstructured text data, and to assign an event type to each trigger and a semantic role to each argument. To make event extraction effective as new real-world scenarios emerge, we take a look at this task from the perspective of zero-shot learning (ZSL) (Frome et al., 2013; Norouzi et al., 2013; Socher et al., 2013a). ZSL, as a type of transfer learning, makes use of separate, pre-existing classifiers to build a semantic, cross-concept space that maps between their respective classes. The resulting shared semantic space allows for building a novel “zero-shot” classifier, i.e., one requiring no (zero) additional training examples, to handle unseen cases. Consider, for example, the sentence: “The Government of China has ruled Tibet since 1951 after dispatching troops to the Himalayan region in 1950.”
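To make the task definition concrete, the sketch below encodes one plausible analysis of the example sentence as an event mention: the trigger “dispatching”, an event type drawn from a target ontology, and its arguments with semantic roles. The specific type and role labels are illustrative, not quoted from the paper's annotation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Argument:
    text: str   # argument span in the sentence
    role: str   # semantic role relative to the trigger


@dataclass
class EventMention:
    trigger: str               # word or phrase evoking the event
    event_type: str            # type assigned from the target ontology
    arguments: List[Argument]


# One plausible reading of the example sentence (labels are illustrative).
mention = EventMention(
    trigger="dispatching",
    event_type="Transport-Person",
    arguments=[
        Argument("The Government of China", "Agent"),
        Argument("troops", "Person"),
        Argument("the Himalayan region", "Destination"),
    ],
)
```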
