Abstract

Temporal relation extraction aims to infer the temporal order of two events in a document. Because of the nature of events in real life, the temporal relation classes in this task are severely imbalanced. Although various methods have been proposed to improve overall performance, their accuracy on minority temporal classes remains limited. In this work, we present a contrastive prototypical learning architecture that addresses this problem by explicitly modeling the spatial similarity between instances in the embedding space, so that instances from minority classes can be distinguished from those of the majority classes. To make the architecture compatible with current temporal relation extraction settings, we propose a novel sampling method based on a memory queue, so that it can be applied in limited-batch-size scenarios. We further design a context encoding layer that incorporates both contextualized information and linguistic features such as tense and dependency. Our extensive experiments on the TimeBank-Dense, TDDiscourse, and MATRES datasets demonstrate that our model significantly improves performance on minority relation classes and thereby increases overall learning ability.
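The memory-queue idea described above can be illustrated with a minimal sketch: a FIFO queue keeps a bounded number of embeddings per relation class, so minority classes remain available as positives and negatives for a contrastive loss even when the current batch is small. All names, sizes, and the InfoNCE-style loss below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

class ClassBalancedQueue:
    """FIFO memory queue keeping up to `per_class` embeddings per relation
    class, so minority classes stay represented under small batch sizes.
    (Illustrative sketch; not the paper's code.)"""

    def __init__(self, num_classes, per_class):
        self.queues = {c: [] for c in range(num_classes)}
        self.per_class = per_class

    def enqueue(self, embeddings, labels):
        for e, y in zip(embeddings, labels):
            q = self.queues[int(y)]
            q.append(e / (np.linalg.norm(e) + 1e-12))  # store unit vectors
            if len(q) > self.per_class:
                q.pop(0)  # FIFO eviction of the oldest entry

    def contrastive_loss(self, emb, label, tau=0.1):
        """InfoNCE-style loss for one anchor against the queue:
        pull toward same-class entries, push away from the rest."""
        emb = emb / (np.linalg.norm(emb) + 1e-12)
        sims, positives = [], []
        for c, q in self.queues.items():
            for v in q:
                s = np.exp(np.dot(emb, v) / tau)
                sims.append(s)
                if c == label:
                    positives.append(s)
        if not positives:
            return 0.0
        denom = sum(sims)
        return -np.mean([np.log(p / denom) for p in positives])
```

Because the queue is updated across batches, the loss for each anchor contrasts against far more instances than a single batch contains, which is what makes the approach workable in the limited-batch-size setting the abstract mentions.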
