Abstract
Event representation methods usually rely on external knowledge to construct potentially more useful positive or negative samples, while neglecting how those samples are drawn during training. In this paper, we propose a prototype-based negative sampling method that samples informative negatives and better learns the semantic classes. This sampling method has theoretical generalization guarantees and handles false negatives in a natural way. Furthermore, we apply a parametric augmentation strategy to generate positive pairs, which alleviates BERT embedding bias and decouples the interrelation of augmented positive pairs. Experimental results show that our unsupervised approach improves over state-of-the-art methods by 0.8%–5.1% on the hard similarity dataset and by 2.62% on the MCNC dataset. Our proposed method provides a novel insight into negative sampling and augmentation for event representation.
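To make the core idea concrete, the sketch below shows one way prototype-based negative sampling could be realized in a contrastive objective: event embeddings are clustered into prototypes, and negatives for an anchor are drawn only from clusters other than the anchor's, which naturally filters out false negatives belonging to the anchor's semantic class. This is a minimal illustrative sketch under assumed details (the k-means step, function names, and hyperparameters are all assumptions), not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): prototype-based negative sampling
# for contrastive event representation learning.
import torch
import torch.nn.functional as F


def kmeans_prototypes(embeddings, num_prototypes, iters=10):
    """Simple k-means to obtain prototype centroids and cluster assignments."""
    idx = torch.randperm(embeddings.size(0))[:num_prototypes]
    centroids = embeddings[idx].clone()
    for _ in range(iters):
        dists = torch.cdist(embeddings, centroids)   # (N, K) distances to prototypes
        assign = dists.argmin(dim=1)                 # (N,) cluster assignment per event
        for k in range(num_prototypes):
            members = embeddings[assign == k]
            if members.numel() > 0:
                centroids[k] = members.mean(dim=0)
    return centroids, assign


def prototype_contrastive_loss(anchor, positive, embeddings, assign, anchor_assign,
                               num_negatives=16, temperature=0.1):
    """InfoNCE-style loss whose negatives come only from clusters other than the anchor's,
    so events that likely share the anchor's semantic class are never used as negatives."""
    mask = assign != anchor_assign
    candidates = embeddings[mask]
    neg_idx = torch.randperm(candidates.size(0))[:num_negatives]
    negatives = candidates[neg_idx]                                      # (M, D)

    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)

    pos_logit = (anchor * positive).sum(-1, keepdim=True) / temperature  # (1,)
    neg_logits = anchor @ negatives.t() / temperature                    # (M,)
    logits = torch.cat([pos_logit, neg_logits], dim=0).unsqueeze(0)      # (1, M+1)
    labels = torch.zeros(1, dtype=torch.long)                            # positive sits at index 0
    return F.cross_entropy(logits, labels)


# Toy usage: random vectors stand in for encoder (e.g., BERT) event embeddings.
emb = torch.randn(256, 128)
protos, assign = kmeans_prototypes(emb, num_prototypes=8)
positive = emb[0] + 0.01 * torch.randn(128)   # stand-in for an augmented positive view
loss = prototype_contrastive_loss(emb[0], positive, emb, assign, anchor_assign=assign[0])
```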