Abstract
Event representation methods usually use external knowledge to construct potentially more useful positive or negative samples, while neglecting the importance of how those samples are drawn during training. In this paper, we propose a prototype-based negative sampling method that samples useful negatives and better learns the semantic classes. This sampling method has theoretical generalization guarantees and handles false negatives in a natural way. Furthermore, we apply a parametric augmentation strategy to generate positive pairs, which alleviates BERT embedding bias and decouples the interrelation of augmented positive pairs. Experimental results show that our unsupervised approach achieves a 0.8%–5.1% improvement over state-of-the-art methods on the hard similarity datasets, and a 2.62% improvement on the MCNC dataset. Our proposed method provides a novel insight into negative sampling and augmentation for event representation.